Jan 31 07:31:35 localhost kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 07:31:35 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 07:31:35 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 07:31:35 localhost kernel: BIOS-provided physical RAM map:
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 07:31:35 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 07:31:35 localhost kernel: NX (Execute Disable) protection: active
Jan 31 07:31:35 localhost kernel: APIC: Static calls initialized
Jan 31 07:31:35 localhost kernel: SMBIOS 2.8 present.
Jan 31 07:31:35 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 07:31:35 localhost kernel: Hypervisor detected: KVM
Jan 31 07:31:35 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 07:31:35 localhost kernel: kvm-clock: using sched offset of 4108458130 cycles
Jan 31 07:31:35 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 07:31:35 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 31 07:31:35 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 31 07:31:35 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 31 07:31:35 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 07:31:35 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 07:31:35 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 07:31:35 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 07:31:35 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 07:31:35 localhost kernel: Using GB pages for direct mapping
Jan 31 07:31:35 localhost kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 07:31:35 localhost kernel: ACPI: Early table checksum verification disabled
Jan 31 07:31:35 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 07:31:35 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:31:35 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:31:35 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:31:35 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 07:31:35 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:31:35 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 07:31:35 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 07:31:35 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 07:31:35 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 07:31:35 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 07:31:35 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 07:31:35 localhost kernel: No NUMA configuration found
Jan 31 07:31:35 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 07:31:35 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 07:31:35 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 07:31:35 localhost kernel: Zone ranges:
Jan 31 07:31:35 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 07:31:35 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 07:31:35 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 07:31:35 localhost kernel:   Device   empty
Jan 31 07:31:35 localhost kernel: Movable zone start for each node
Jan 31 07:31:35 localhost kernel: Early memory node ranges
Jan 31 07:31:35 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 07:31:35 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 07:31:35 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 07:31:35 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 07:31:35 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 07:31:35 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 07:31:35 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 07:31:35 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 07:31:35 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 07:31:35 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 07:31:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 07:31:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 07:31:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 07:31:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 07:31:35 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 07:31:35 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 07:31:35 localhost kernel: TSC deadline timer available
Jan 31 07:31:35 localhost kernel: CPU topo: Max. logical packages:   8
Jan 31 07:31:35 localhost kernel: CPU topo: Max. logical dies:       8
Jan 31 07:31:35 localhost kernel: CPU topo: Max. dies per package:   1
Jan 31 07:31:35 localhost kernel: CPU topo: Max. threads per core:   1
Jan 31 07:31:35 localhost kernel: CPU topo: Num. cores per package:     1
Jan 31 07:31:35 localhost kernel: CPU topo: Num. threads per package:   1
Jan 31 07:31:35 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 07:31:35 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 07:31:35 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 07:31:35 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 07:31:35 localhost kernel: Booting paravirtualized kernel on KVM
Jan 31 07:31:35 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 07:31:35 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 07:31:35 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 07:31:35 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 31 07:31:35 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 31 07:31:35 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 07:31:35 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 07:31:35 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 07:31:35 localhost kernel: random: crng init done
Jan 31 07:31:35 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 07:31:35 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 07:31:35 localhost kernel: Fallback order for Node 0: 0 
Jan 31 07:31:35 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 07:31:35 localhost kernel: Policy zone: Normal
Jan 31 07:31:35 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 07:31:35 localhost kernel: software IO TLB: area num 8.
Jan 31 07:31:35 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 07:31:35 localhost kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 07:31:35 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 07:31:35 localhost kernel: Dynamic Preempt: voluntary
Jan 31 07:31:35 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 07:31:35 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 31 07:31:35 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 07:31:35 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 31 07:31:35 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 31 07:31:35 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 31 07:31:35 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 07:31:35 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 07:31:35 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 07:31:35 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 07:31:35 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 07:31:35 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 07:31:35 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 07:31:35 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 07:31:35 localhost kernel: Console: colour VGA+ 80x25
Jan 31 07:31:35 localhost kernel: printk: console [ttyS0] enabled
Jan 31 07:31:35 localhost kernel: ACPI: Core revision 20230331
Jan 31 07:31:35 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 07:31:35 localhost kernel: x2apic enabled
Jan 31 07:31:35 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 07:31:35 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 07:31:35 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 31 07:31:35 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 07:31:35 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 07:31:35 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 07:31:35 localhost kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 07:31:35 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 07:31:35 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 07:31:35 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 07:31:35 localhost kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 07:31:35 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 07:31:35 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 07:31:35 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 07:31:35 localhost kernel: active return thunk: retbleed_return_thunk
Jan 31 07:31:35 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 07:31:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 07:31:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 07:31:35 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 07:31:35 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 07:31:35 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 07:31:35 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 31 07:31:35 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 31 07:31:35 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 07:31:35 localhost kernel: landlock: Up and running.
Jan 31 07:31:35 localhost kernel: Yama: becoming mindful.
Jan 31 07:31:35 localhost kernel: SELinux:  Initializing.
Jan 31 07:31:35 localhost kernel: LSM support for eBPF active
Jan 31 07:31:35 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 07:31:35 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 07:31:35 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 07:31:35 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 07:31:35 localhost kernel: ... version:                0
Jan 31 07:31:35 localhost kernel: ... bit width:              48
Jan 31 07:31:35 localhost kernel: ... generic registers:      6
Jan 31 07:31:35 localhost kernel: ... value mask:             0000ffffffffffff
Jan 31 07:31:35 localhost kernel: ... max period:             00007fffffffffff
Jan 31 07:31:35 localhost kernel: ... fixed-purpose events:   0
Jan 31 07:31:35 localhost kernel: ... event mask:             000000000000003f
Jan 31 07:31:35 localhost kernel: signal: max sigframe size: 1776
Jan 31 07:31:35 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 31 07:31:35 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 31 07:31:35 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 31 07:31:35 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 31 07:31:35 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 07:31:35 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 07:31:35 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 31 07:31:35 localhost kernel: node 0 deferred pages initialised in 10ms
Jan 31 07:31:35 localhost kernel: Memory: 7763608K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618404K reserved, 0K cma-reserved)
Jan 31 07:31:35 localhost kernel: devtmpfs: initialized
Jan 31 07:31:35 localhost kernel: x86/mm: Memory block size: 128MB
Jan 31 07:31:35 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 07:31:35 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 07:31:35 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 07:31:35 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 07:31:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 07:31:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 07:31:35 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 07:31:35 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 31 07:31:35 localhost kernel: audit: type=2000 audit(1769844694.169:1): state=initialized audit_enabled=0 res=1
Jan 31 07:31:35 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 07:31:35 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 07:31:35 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 07:31:35 localhost kernel: cpuidle: using governor menu
Jan 31 07:31:35 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 07:31:35 localhost kernel: PCI: Using configuration type 1 for base access
Jan 31 07:31:35 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 31 07:31:35 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 07:31:35 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 07:31:35 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 07:31:35 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 07:31:35 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 07:31:35 localhost kernel: Demotion targets for Node 0: null
Jan 31 07:31:35 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 07:31:35 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 31 07:31:35 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 31 07:31:35 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 07:31:35 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 07:31:35 localhost kernel: ACPI: Interpreter enabled
Jan 31 07:31:35 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 07:31:35 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 07:31:35 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 07:31:35 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 07:31:35 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 07:31:35 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 07:31:35 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [3] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [4] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [5] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [6] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [7] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [8] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [9] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [10] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [11] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [12] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [13] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [14] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [15] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [16] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [17] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [18] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [19] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [20] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [21] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [22] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [23] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [24] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [25] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [26] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [27] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [28] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [29] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [30] registered
Jan 31 07:31:35 localhost kernel: acpiphp: Slot [31] registered
Jan 31 07:31:35 localhost kernel: PCI host bridge to bus 0000:00
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 07:31:35 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 07:31:35 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 07:31:35 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 07:31:35 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 07:31:35 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 07:31:35 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 07:31:35 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 07:31:35 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 07:31:35 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 07:31:35 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 07:31:35 localhost kernel: iommu: Default domain type: Translated
Jan 31 07:31:35 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 07:31:35 localhost kernel: SCSI subsystem initialized
Jan 31 07:31:35 localhost kernel: ACPI: bus type USB registered
Jan 31 07:31:35 localhost kernel: usbcore: registered new interface driver usbfs
Jan 31 07:31:35 localhost kernel: usbcore: registered new interface driver hub
Jan 31 07:31:35 localhost kernel: usbcore: registered new device driver usb
Jan 31 07:31:35 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 07:31:35 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 07:31:35 localhost kernel: PTP clock support registered
Jan 31 07:31:35 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 31 07:31:35 localhost kernel: NetLabel: Initializing
Jan 31 07:31:35 localhost kernel: NetLabel:  domain hash size = 128
Jan 31 07:31:35 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 07:31:35 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 07:31:35 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 31 07:31:35 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 31 07:31:35 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 31 07:31:35 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 07:31:35 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 07:31:35 localhost kernel: vgaarb: loaded
Jan 31 07:31:35 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 07:31:35 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 07:31:35 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 07:31:35 localhost kernel: pnp: PnP ACPI init
Jan 31 07:31:35 localhost kernel: pnp 00:03: [dma 2]
Jan 31 07:31:35 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 31 07:31:35 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 07:31:35 localhost kernel: NET: Registered PF_INET protocol family
Jan 31 07:31:35 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 07:31:35 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 07:31:35 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 07:31:35 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 07:31:35 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 07:31:35 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 07:31:35 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 07:31:35 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 07:31:35 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 07:31:35 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 07:31:35 localhost kernel: NET: Registered PF_XDP protocol family
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 07:31:35 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 07:31:35 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 07:31:35 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 07:31:35 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 24569 usecs
Jan 31 07:31:35 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 31 07:31:35 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 07:31:35 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 07:31:35 localhost kernel: ACPI: bus type thunderbolt registered
Jan 31 07:31:35 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 31 07:31:35 localhost kernel: Initialise system trusted keyrings
Jan 31 07:31:35 localhost kernel: Key type blacklist registered
Jan 31 07:31:35 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 07:31:35 localhost kernel: zbud: loaded
Jan 31 07:31:35 localhost kernel: integrity: Platform Keyring initialized
Jan 31 07:31:35 localhost kernel: integrity: Machine keyring initialized
Jan 31 07:31:35 localhost kernel: Freeing initrd memory: 88000K
Jan 31 07:31:35 localhost kernel: NET: Registered PF_ALG protocol family
Jan 31 07:31:35 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 31 07:31:35 localhost kernel: Key type asymmetric registered
Jan 31 07:31:35 localhost kernel: Asymmetric key parser 'x509' registered
Jan 31 07:31:35 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 07:31:35 localhost kernel: io scheduler mq-deadline registered
Jan 31 07:31:35 localhost kernel: io scheduler kyber registered
Jan 31 07:31:35 localhost kernel: io scheduler bfq registered
Jan 31 07:31:35 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 07:31:35 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 07:31:35 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 07:31:35 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 31 07:31:35 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 07:31:35 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 07:31:35 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 07:31:35 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 07:31:35 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 07:31:35 localhost kernel: Non-volatile memory driver v1.3
Jan 31 07:31:35 localhost kernel: rdac: device handler registered
Jan 31 07:31:35 localhost kernel: hp_sw: device handler registered
Jan 31 07:31:35 localhost kernel: emc: device handler registered
Jan 31 07:31:35 localhost kernel: alua: device handler registered
Jan 31 07:31:35 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 07:31:35 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 07:31:35 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 07:31:35 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 07:31:35 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 07:31:35 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 07:31:35 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 31 07:31:35 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 07:31:35 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 07:31:35 localhost kernel: hub 1-0:1.0: USB hub found
Jan 31 07:31:35 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 31 07:31:35 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 07:31:35 localhost kernel: usbserial: USB Serial support registered for generic
Jan 31 07:31:35 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 07:31:35 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 07:31:35 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 07:31:35 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 07:31:35 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 07:31:35 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 07:31:35 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T07:31:34 UTC (1769844694)
Jan 31 07:31:35 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 07:31:35 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 07:31:35 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 07:31:35 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 07:31:35 localhost kernel: usbcore: registered new interface driver usbhid
Jan 31 07:31:35 localhost kernel: usbhid: USB HID core driver
Jan 31 07:31:35 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 31 07:31:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 07:31:35 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 07:31:35 localhost kernel: Initializing XFRM netlink socket
Jan 31 07:31:35 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 31 07:31:35 localhost kernel: Segment Routing with IPv6
Jan 31 07:31:35 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 31 07:31:35 localhost kernel: mpls_gso: MPLS GSO support
Jan 31 07:31:35 localhost kernel: IPI shorthand broadcast: enabled
Jan 31 07:31:35 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 07:31:35 localhost kernel: AES CTR mode by8 optimization enabled
Jan 31 07:31:35 localhost kernel: sched_clock: Marking stable (919010430, 139302830)->(1131291050, -72977790)
Jan 31 07:31:35 localhost kernel: registered taskstats version 1
Jan 31 07:31:35 localhost kernel: Loading compiled-in X.509 certificates
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 07:31:35 localhost kernel: Demotion targets for Node 0: null
Jan 31 07:31:35 localhost kernel: page_owner is disabled
Jan 31 07:31:35 localhost kernel: Key type .fscrypt registered
Jan 31 07:31:35 localhost kernel: Key type fscrypt-provisioning registered
Jan 31 07:31:35 localhost kernel: Key type big_key registered
Jan 31 07:31:35 localhost kernel: Key type encrypted registered
Jan 31 07:31:35 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 07:31:35 localhost kernel: Loading compiled-in module X.509 certificates
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 07:31:35 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 31 07:31:35 localhost kernel: ima: No architecture policies found
Jan 31 07:31:35 localhost kernel: evm: Initialising EVM extended attributes:
Jan 31 07:31:35 localhost kernel: evm: security.selinux
Jan 31 07:31:35 localhost kernel: evm: security.SMACK64 (disabled)
Jan 31 07:31:35 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 07:31:35 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 07:31:35 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 07:31:35 localhost kernel: evm: security.apparmor (disabled)
Jan 31 07:31:35 localhost kernel: evm: security.ima
Jan 31 07:31:35 localhost kernel: evm: security.capability
Jan 31 07:31:35 localhost kernel: evm: HMAC attrs: 0x1
Jan 31 07:31:35 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 07:31:35 localhost kernel: Running certificate verification RSA selftest
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 07:31:35 localhost kernel: Running certificate verification ECDSA selftest
Jan 31 07:31:35 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 07:31:35 localhost kernel: clk: Disabling unused clocks
Jan 31 07:31:35 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 31 07:31:35 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 07:31:35 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 31 07:31:35 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 07:31:35 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 07:31:35 localhost kernel: Run /init as init process
Jan 31 07:31:35 localhost kernel:   with arguments:
Jan 31 07:31:35 localhost kernel:     /init
Jan 31 07:31:35 localhost kernel:   with environment:
Jan 31 07:31:35 localhost kernel:     HOME=/
Jan 31 07:31:35 localhost kernel:     TERM=linux
Jan 31 07:31:35 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64
Jan 31 07:31:35 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 07:31:35 localhost systemd[1]: Detected virtualization kvm.
Jan 31 07:31:35 localhost systemd[1]: Detected architecture x86-64.
Jan 31 07:31:35 localhost systemd[1]: Running in initrd.
Jan 31 07:31:35 localhost systemd[1]: No hostname configured, using default hostname.
Jan 31 07:31:35 localhost systemd[1]: Hostname set to <localhost>.
Jan 31 07:31:35 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 31 07:31:35 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 07:31:35 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 07:31:35 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 07:31:35 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 31 07:31:35 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 07:31:35 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 07:31:35 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 07:31:35 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 31 07:31:35 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 07:31:35 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 07:31:35 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 31 07:31:35 localhost systemd[1]: Reached target Local File Systems.
Jan 31 07:31:35 localhost systemd[1]: Reached target Path Units.
Jan 31 07:31:35 localhost systemd[1]: Reached target Slice Units.
Jan 31 07:31:35 localhost systemd[1]: Reached target Swaps.
Jan 31 07:31:35 localhost systemd[1]: Reached target Timer Units.
Jan 31 07:31:35 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 07:31:35 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 31 07:31:35 localhost systemd[1]: Listening on Journal Socket.
Jan 31 07:31:35 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 07:31:35 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 07:31:35 localhost systemd[1]: Reached target Socket Units.
Jan 31 07:31:35 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 07:31:35 localhost systemd[1]: Starting Journal Service...
Jan 31 07:31:35 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 07:31:35 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 07:31:35 localhost systemd[1]: Starting Create System Users...
Jan 31 07:31:35 localhost systemd[1]: Starting Setup Virtual Console...
Jan 31 07:31:35 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 07:31:35 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 07:31:35 localhost systemd[1]: Finished Create System Users.
Jan 31 07:31:35 localhost systemd-journald[305]: Journal started
Jan 31 07:31:35 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/2848852e0b6443df9df31c9bd96fb83b) is 8.0M, max 153.6M, 145.6M free.
Jan 31 07:31:35 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Jan 31 07:31:35 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Jan 31 07:31:35 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 07:31:35 localhost systemd[1]: Started Journal Service.
Jan 31 07:31:35 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 07:31:35 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 07:31:35 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 07:31:35 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 07:31:35 localhost systemd[1]: Finished Setup Virtual Console.
Jan 31 07:31:35 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 07:31:35 localhost systemd[1]: Starting dracut cmdline hook...
Jan 31 07:31:35 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 07:31:35 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 07:31:35 localhost systemd[1]: Finished dracut cmdline hook.
Jan 31 07:31:35 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 31 07:31:35 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 07:31:35 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 31 07:31:35 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 07:31:35 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 31 07:31:35 localhost kernel: RPC: Registered udp transport module.
Jan 31 07:31:35 localhost kernel: RPC: Registered tcp transport module.
Jan 31 07:31:35 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 07:31:35 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 07:31:35 localhost rpc.statd[441]: Version 2.5.4 starting
Jan 31 07:31:35 localhost rpc.statd[441]: Initializing NSM state
Jan 31 07:31:35 localhost rpc.idmapd[446]: Setting log level to 0
Jan 31 07:31:35 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 31 07:31:35 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 07:31:35 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 07:31:35 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 07:31:35 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 31 07:31:35 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 31 07:31:35 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 07:31:35 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 31 07:31:35 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 07:31:35 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 07:31:35 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 07:31:35 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 07:31:35 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 07:31:35 localhost systemd[1]: Reached target Network.
Jan 31 07:31:35 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 07:31:35 localhost systemd[1]: Starting dracut initqueue hook...
Jan 31 07:31:35 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 07:31:35 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 07:31:35 localhost kernel:  vda: vda1
Jan 31 07:31:35 localhost kernel: libata version 3.00 loaded.
Jan 31 07:31:35 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 31 07:31:35 localhost systemd-udevd[476]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:31:35 localhost kernel: scsi host0: ata_piix
Jan 31 07:31:35 localhost kernel: scsi host1: ata_piix
Jan 31 07:31:35 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 07:31:35 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 07:31:36 localhost systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 07:31:36 localhost systemd[1]: Reached target Initrd Root Device.
Jan 31 07:31:36 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 31 07:31:36 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 31 07:31:36 localhost systemd[1]: Reached target System Initialization.
Jan 31 07:31:36 localhost systemd[1]: Reached target Basic System.
Jan 31 07:31:36 localhost kernel: ata1: found unknown device (class 0)
Jan 31 07:31:36 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 07:31:36 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 07:31:36 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 07:31:36 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 07:31:36 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 07:31:36 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 31 07:31:36 localhost systemd[1]: Finished dracut initqueue hook.
Jan 31 07:31:36 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 07:31:36 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 07:31:36 localhost systemd[1]: Reached target Remote File Systems.
Jan 31 07:31:36 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 31 07:31:36 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 31 07:31:36 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 07:31:36 localhost systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 07:31:36 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 07:31:36 localhost systemd[1]: Mounting /sysroot...
Jan 31 07:31:36 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 07:31:36 localhost kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 07:31:36 localhost kernel: XFS (vda1): Ending clean mount
Jan 31 07:31:36 localhost systemd[1]: Mounted /sysroot.
Jan 31 07:31:36 localhost systemd[1]: Reached target Initrd Root File System.
Jan 31 07:31:36 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 07:31:36 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 07:31:36 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 07:31:36 localhost systemd[1]: Reached target Initrd File Systems.
Jan 31 07:31:36 localhost systemd[1]: Reached target Initrd Default Target.
Jan 31 07:31:36 localhost systemd[1]: Starting dracut mount hook...
Jan 31 07:31:36 localhost systemd[1]: Finished dracut mount hook.
Jan 31 07:31:36 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 07:31:37 localhost rpc.idmapd[446]: exiting on signal 15
Jan 31 07:31:37 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 07:31:37 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 07:31:37 localhost systemd[1]: Stopped target Network.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Timer Units.
Jan 31 07:31:37 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 07:31:37 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Basic System.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Path Units.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Remote File Systems.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Slice Units.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Socket Units.
Jan 31 07:31:37 localhost systemd[1]: Stopped target System Initialization.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Local File Systems.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Swaps.
Jan 31 07:31:37 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut mount hook.
Jan 31 07:31:37 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 31 07:31:37 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 07:31:37 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 07:31:37 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 31 07:31:37 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 31 07:31:37 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 07:31:37 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 07:31:37 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 07:31:37 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 07:31:37 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 31 07:31:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 07:31:37 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Closed udev Control Socket.
Jan 31 07:31:37 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Closed udev Kernel Socket.
Jan 31 07:31:37 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 31 07:31:37 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 31 07:31:37 localhost systemd[1]: Starting Cleanup udev Database...
Jan 31 07:31:37 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 07:31:37 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 07:31:37 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Stopped Create System Users.
Jan 31 07:31:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 07:31:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 07:31:37 localhost systemd[1]: Finished Cleanup udev Database.
Jan 31 07:31:37 localhost systemd[1]: Reached target Switch Root.
Jan 31 07:31:37 localhost systemd[1]: Starting Switch Root...
Jan 31 07:31:37 localhost systemd[1]: Switching root.
Jan 31 07:31:37 localhost systemd-journald[305]: Journal stopped
Jan 31 07:31:38 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Jan 31 07:31:38 localhost kernel: audit: type=1404 audit(1769844697.342:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability open_perms=1
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:31:38 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:31:38 localhost kernel: audit: type=1403 audit(1769844697.478:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 07:31:38 localhost systemd[1]: Successfully loaded SELinux policy in 142.036ms.
Jan 31 07:31:38 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.829ms.
Jan 31 07:31:38 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 07:31:38 localhost systemd[1]: Detected virtualization kvm.
Jan 31 07:31:38 localhost systemd[1]: Detected architecture x86-64.
Jan 31 07:31:38 localhost systemd-rc-local-generator[641]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:31:38 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Stopped Switch Root.
Jan 31 07:31:38 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 07:31:38 localhost systemd[1]: Created slice Slice /system/getty.
Jan 31 07:31:38 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 31 07:31:38 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 31 07:31:38 localhost systemd[1]: Created slice User and Session Slice.
Jan 31 07:31:38 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 07:31:38 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 31 07:31:38 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 07:31:38 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 31 07:31:38 localhost systemd[1]: Stopped target Switch Root.
Jan 31 07:31:38 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 31 07:31:38 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 31 07:31:38 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 31 07:31:38 localhost systemd[1]: Reached target Path Units.
Jan 31 07:31:38 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 31 07:31:38 localhost systemd[1]: Reached target Slice Units.
Jan 31 07:31:38 localhost systemd[1]: Reached target Swaps.
Jan 31 07:31:38 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 31 07:31:38 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 31 07:31:38 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 31 07:31:38 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 31 07:31:38 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 31 07:31:38 localhost systemd[1]: Listening on udev Control Socket.
Jan 31 07:31:38 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 31 07:31:38 localhost systemd[1]: Mounting Huge Pages File System...
Jan 31 07:31:38 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 31 07:31:38 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 31 07:31:38 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 31 07:31:38 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 07:31:38 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 31 07:31:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 07:31:38 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 31 07:31:38 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 31 07:31:38 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 31 07:31:38 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 07:31:38 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 31 07:31:38 localhost systemd[1]: Stopped Journal Service.
Jan 31 07:31:38 localhost systemd[1]: Starting Journal Service...
Jan 31 07:31:38 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 07:31:38 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 31 07:31:38 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 07:31:38 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 31 07:31:38 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 07:31:38 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 07:31:38 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 31 07:31:38 localhost kernel: fuse: init (API version 7.37)
Jan 31 07:31:38 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 31 07:31:38 localhost systemd[1]: Mounted Huge Pages File System.
Jan 31 07:31:38 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 31 07:31:38 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 31 07:31:38 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 31 07:31:38 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 31 07:31:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 07:31:38 localhost kernel: ACPI: bus type drm_connector registered
Jan 31 07:31:38 localhost systemd-journald[682]: Journal started
Jan 31 07:31:38 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 07:31:37 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 31 07:31:37 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Started Journal Service.
Jan 31 07:31:38 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 31 07:31:38 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 07:31:38 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 31 07:31:38 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 07:31:38 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 07:31:38 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 07:31:38 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 31 07:31:38 localhost systemd[1]: Mounting FUSE Control File System...
Jan 31 07:31:38 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 07:31:38 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 31 07:31:38 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 07:31:38 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 07:31:38 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 07:31:38 localhost systemd[1]: Starting Create System Users...
Jan 31 07:31:38 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 31 07:31:38 localhost systemd[1]: Mounted FUSE Control File System.
Jan 31 07:31:38 localhost systemd-journald[682]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 07:31:38 localhost systemd-journald[682]: Received client request to flush runtime journal.
Jan 31 07:31:38 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 07:31:38 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 07:31:38 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 07:31:38 localhost systemd[1]: Finished Create System Users.
Jan 31 07:31:38 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 07:31:38 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 07:31:38 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 07:31:38 localhost systemd[1]: Reached target Local File Systems.
Jan 31 07:31:38 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 07:31:38 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 07:31:38 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 07:31:38 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 07:31:38 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 07:31:38 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 07:31:38 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 07:31:38 localhost bootctl[700]: Couldn't find EFI system partition, skipping.
Jan 31 07:31:38 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 07:31:38 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 07:31:38 localhost systemd[1]: Starting Security Auditing Service...
Jan 31 07:31:38 localhost systemd[1]: Starting RPC Bind...
Jan 31 07:31:38 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 07:31:38 localhost auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 07:31:38 localhost auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 07:31:38 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 07:31:38 localhost systemd[1]: Started RPC Bind.
Jan 31 07:31:38 localhost augenrules[711]: /sbin/augenrules: No change
Jan 31 07:31:38 localhost augenrules[726]: No rules
Jan 31 07:31:38 localhost augenrules[726]: enabled 1
Jan 31 07:31:38 localhost augenrules[726]: failure 1
Jan 31 07:31:38 localhost augenrules[726]: pid 706
Jan 31 07:31:38 localhost augenrules[726]: rate_limit 0
Jan 31 07:31:38 localhost augenrules[726]: backlog_limit 8192
Jan 31 07:31:38 localhost augenrules[726]: lost 0
Jan 31 07:31:38 localhost augenrules[726]: backlog 4
Jan 31 07:31:38 localhost augenrules[726]: backlog_wait_time 60000
Jan 31 07:31:38 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 31 07:31:38 localhost augenrules[726]: enabled 1
Jan 31 07:31:38 localhost augenrules[726]: failure 1
Jan 31 07:31:38 localhost augenrules[726]: pid 706
Jan 31 07:31:38 localhost augenrules[726]: rate_limit 0
Jan 31 07:31:38 localhost augenrules[726]: backlog_limit 8192
Jan 31 07:31:38 localhost augenrules[726]: lost 0
Jan 31 07:31:38 localhost augenrules[726]: backlog 4
Jan 31 07:31:38 localhost augenrules[726]: backlog_wait_time 60000
Jan 31 07:31:38 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 31 07:31:38 localhost augenrules[726]: enabled 1
Jan 31 07:31:38 localhost augenrules[726]: failure 1
Jan 31 07:31:38 localhost augenrules[726]: pid 706
Jan 31 07:31:38 localhost augenrules[726]: rate_limit 0
Jan 31 07:31:38 localhost augenrules[726]: backlog_limit 8192
Jan 31 07:31:38 localhost augenrules[726]: lost 0
Jan 31 07:31:38 localhost augenrules[726]: backlog 3
Jan 31 07:31:38 localhost augenrules[726]: backlog_wait_time 60000
Jan 31 07:31:38 localhost augenrules[726]: backlog_wait_time_actual 0
Jan 31 07:31:38 localhost systemd[1]: Started Security Auditing Service.
Jan 31 07:31:38 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 07:31:38 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 07:31:38 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 31 07:31:38 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 07:31:38 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 07:31:38 localhost systemd[1]: Starting Update is Completed...
Jan 31 07:31:38 localhost systemd[1]: Finished Update is Completed.
Jan 31 07:31:38 localhost systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 07:31:38 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 07:31:38 localhost systemd[1]: Reached target System Initialization.
Jan 31 07:31:38 localhost systemd[1]: Started dnf makecache --timer.
Jan 31 07:31:38 localhost systemd[1]: Started Daily rotation of log files.
Jan 31 07:31:38 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 07:31:38 localhost systemd[1]: Reached target Timer Units.
Jan 31 07:31:38 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 07:31:38 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 07:31:38 localhost systemd[1]: Reached target Socket Units.
Jan 31 07:31:38 localhost systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:31:38 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 31 07:31:38 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 07:31:38 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 07:31:38 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 31 07:31:38 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 07:31:38 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 31 07:31:38 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 07:31:38 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 07:31:38 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 07:31:38 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 07:31:38 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 31 07:31:38 localhost dbus-broker-lau[771]: Ready
Jan 31 07:31:38 localhost systemd[1]: Reached target Basic System.
Jan 31 07:31:38 localhost systemd[1]: Starting NTP client/server...
Jan 31 07:31:38 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 07:31:38 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 07:31:38 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 07:31:38 localhost systemd[1]: Started irqbalance daemon.
Jan 31 07:31:38 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 07:31:38 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:31:38 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:31:38 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 07:31:38 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 31 07:31:38 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 07:31:38 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 31 07:31:38 localhost chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 07:31:38 localhost chronyd[792]: Loaded 0 symmetric keys
Jan 31 07:31:38 localhost chronyd[792]: Using right/UTC timezone to obtain leap second data
Jan 31 07:31:38 localhost chronyd[792]: Loaded seccomp filter (level 2)
Jan 31 07:31:39 localhost systemd[1]: Starting User Login Management...
Jan 31 07:31:39 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 07:31:39 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 07:31:39 localhost kernel: Console: switching to colour dummy device 80x25
Jan 31 07:31:39 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 07:31:39 localhost kernel: [drm] features: -context_init
Jan 31 07:31:39 localhost systemd[1]: Started NTP client/server.
Jan 31 07:31:39 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 07:31:39 localhost kernel: [drm] number of scanouts: 1
Jan 31 07:31:39 localhost kernel: [drm] number of cap sets: 0
Jan 31 07:31:39 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 07:31:39 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 07:31:39 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 31 07:31:39 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 07:31:39 localhost kernel: kvm_amd: TSC scaling supported
Jan 31 07:31:39 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 31 07:31:39 localhost kernel: kvm_amd: Nested Paging enabled
Jan 31 07:31:39 localhost kernel: kvm_amd: LBR virtualization supported
Jan 31 07:31:39 localhost systemd-logind[793]: New seat seat0.
Jan 31 07:31:39 localhost systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 07:31:39 localhost systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 07:31:39 localhost systemd[1]: Started User Login Management.
Jan 31 07:31:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 07:31:39 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 07:31:39 localhost iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Jan 31 07:31:39 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 07:31:39 localhost cloud-init[843]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 07:31:39 +0000. Up 6.09 seconds.
Jan 31 07:31:39 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 31 07:31:39 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 31 07:31:39 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpd_m9_jnx.mount: Deactivated successfully.
Jan 31 07:31:40 localhost systemd[1]: Starting Hostname Service...
Jan 31 07:31:40 localhost systemd[1]: Started Hostname Service.
Jan 31 07:31:40 np0005603663.novalocal systemd-hostnamed[857]: Hostname set to <np0005603663.novalocal> (static)
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Reached target Preparation for Network.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Starting Network Manager...
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.2763] NetworkManager (version 1.54.3-2.el9) is starting... (boot:46d0e983-b0c8-47a0-b578-409408b2d808)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.2768] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.2945] manager[0x55bbc3499000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.2990] hostname: hostname: using hostnamed
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.2991] hostname: static hostname changed from (none) to "np0005603663.novalocal"
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.2999] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3119] manager[0x55bbc3499000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3120] manager[0x55bbc3499000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3215] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3215] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3216] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3216] manager: Networking is enabled by state file
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3218] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3247] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3277] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3289] dhcp: init: Using DHCP client 'internal'
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3294] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3304] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3316] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3326] device (lo): Activation: starting connection 'lo' (4e410dfc-e55f-4386-a962-128f9b1580ba)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3332] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3334] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3362] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3365] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3367] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3369] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3370] device (eth0): carrier: link connected
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3371] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3376] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3381] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3384] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3385] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3387] manager: NetworkManager state is now CONNECTING
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3389] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3394] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3396] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Started Network Manager.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Reached target Network.
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3452] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3459] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3481] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3567] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3570] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3609] device (lo): Activation: successful, device activated.
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3622] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3625] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3631] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3635] device (eth0): Activation: successful, device activated.
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3644] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:31:40 np0005603663.novalocal NetworkManager[861]: <info>  [1769844700.3648] manager: startup complete
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Reached target NFS client services.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Reached target Remote File Systems.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:31:40 np0005603663.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 07:31:40 +0000. Up 7.05 seconds.
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |  eth0  | True |         38.102.83.23         | 255.255.255.0 | global | fa:16:3e:10:2a:3d |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe10:2a3d/64 |       .       |  link  | fa:16:3e:10:2a:3d |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 07:31:40 np0005603663.novalocal cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 07:31:41 np0005603663.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Jan 31 07:31:41 np0005603663.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 31 07:31:41 np0005603663.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Jan 31 07:31:41 np0005603663.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Jan 31 07:31:41 np0005603663.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Jan 31 07:31:41 np0005603663.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Generating public/private rsa key pair.
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: The key fingerprint is:
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: SHA256:HxoPDMOYF4QbC5R2UOcIKdXrqzmMIxDBD+XL6CbY91c root@np0005603663.novalocal
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: The key's randomart image is:
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: +---[RSA 3072]----+
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |.o*B.o+          |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |oo*.=B .         |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: | +o++=*          |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |. o.=. +         |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: | o +    S .      |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |+.  .    *E.     |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |+=. ..  ..o      |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |* oo..  .        |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |..oo  ..         |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Generating public/private ecdsa key pair.
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: The key fingerprint is:
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: SHA256:wga2K2W/JZ6ryQdN2/3oe+M0W2+oTRsn7/Xf/jg38Jg root@np0005603663.novalocal
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: The key's randomart image is:
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: +---[ECDSA 256]---+
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |                 |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |                 |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |    o            |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |   . +.          |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |    +o+oS.       |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |   o.+o.. .  .   |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |  . ..o .  oo X.o|
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |   o o.=  ..+Eo@*|
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |    +o=. .o++o+*&|
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Generating public/private ed25519 key pair.
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: The key fingerprint is:
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: SHA256:TWn5HSBHJfh2roAc8eqbKUvdpViNa+gGJbMu7m9qWqs root@np0005603663.novalocal
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: The key's randomart image is:
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: +--[ED25519 256]--+
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |          .o=..  |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |        . .= o   |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |         o=.  .  |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |      o o+.+o... |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |       *S++.+o.  |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |      o.+=.+  .  |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |    ...o+ =. .   |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |   .oo+.o+  .    |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: |  E*==o+=.       |
Jan 31 07:31:42 np0005603663.novalocal cloud-init[924]: +----[SHA256]-----+
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Reached target Network is Online.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting System Logging Service...
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 31 07:31:42 np0005603663.novalocal sm-notify[1006]: Version 2.5.4 starting
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting Permit User Sessions...
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Finished Permit User Sessions.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Started Command Scheduler.
Jan 31 07:31:42 np0005603663.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 31 07:31:42 np0005603663.novalocal sshd[1008]: Server listening on :: port 22.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Started Getty on tty1.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 31 07:31:42 np0005603663.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Jan 31 07:31:42 np0005603663.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 31 07:31:42 np0005603663.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 42% if used.)
Jan 31 07:31:42 np0005603663.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Reached target Login Prompts.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 31 07:31:42 np0005603663.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 31 07:31:42 np0005603663.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Started System Logging Service.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Reached target Multi-User System.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1015]: Connection reset by 38.102.83.114 port 45270 [preauth]
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1047]: Unable to negotiate with 38.102.83.114 port 45274: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1070]: Unable to negotiate with 38.102.83.114 port 45294: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1072]: Unable to negotiate with 38.102.83.114 port 45304: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 31 07:31:42 np0005603663.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 07:31:42 np0005603663.novalocal kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Jan 31 07:31:42 np0005603663.novalocal kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1078]: Connection reset by 38.102.83.114 port 45332 [preauth]
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1086]: Unable to negotiate with 38.102.83.114 port 45340: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1094]: Unable to negotiate with 38.102.83.114 port 45346: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1058]: Connection closed by 38.102.83.114 port 45280 [preauth]
Jan 31 07:31:42 np0005603663.novalocal cloud-init[1131]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 07:31:42 +0000. Up 8.88 seconds.
Jan 31 07:31:42 np0005603663.novalocal sshd-session[1074]: Connection closed by 38.102.83.114 port 45320 [preauth]
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 07:31:42 np0005603663.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 07:31:42 np0005603663.novalocal dracut[1285]: dracut-057-102.git20250818.el9
Jan 31 07:31:42 np0005603663.novalocal cloud-init[1303]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 07:31:42 +0000. Up 9.26 seconds.
Jan 31 07:31:42 np0005603663.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 07:31:42 np0005603663.novalocal cloud-init[1320]: #############################################################
Jan 31 07:31:42 np0005603663.novalocal cloud-init[1324]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 07:31:43 np0005603663.novalocal cloud-init[1331]: 256 SHA256:wga2K2W/JZ6ryQdN2/3oe+M0W2+oTRsn7/Xf/jg38Jg root@np0005603663.novalocal (ECDSA)
Jan 31 07:31:43 np0005603663.novalocal cloud-init[1337]: 256 SHA256:TWn5HSBHJfh2roAc8eqbKUvdpViNa+gGJbMu7m9qWqs root@np0005603663.novalocal (ED25519)
Jan 31 07:31:43 np0005603663.novalocal cloud-init[1344]: 3072 SHA256:HxoPDMOYF4QbC5R2UOcIKdXrqzmMIxDBD+XL6CbY91c root@np0005603663.novalocal (RSA)
Jan 31 07:31:43 np0005603663.novalocal cloud-init[1346]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 07:31:43 np0005603663.novalocal cloud-init[1351]: #############################################################
Jan 31 07:31:43 np0005603663.novalocal cloud-init[1303]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 07:31:43 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.44 seconds
Jan 31 07:31:43 np0005603663.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 07:31:43 np0005603663.novalocal systemd[1]: Reached target Cloud-init target.
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: memstrack is not available
Jan 31 07:31:43 np0005603663.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: memstrack is not available
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: *** Including module: systemd ***
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: *** Including module: fips ***
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Jan 31 07:31:44 np0005603663.novalocal dracut[1287]: *** Including module: i18n ***
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]: *** Including module: drm ***
Jan 31 07:31:45 np0005603663.novalocal chronyd[792]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Jan 31 07:31:45 np0005603663.novalocal chronyd[792]: System clock TAI offset set to 37 seconds
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]: *** Including module: prefixdevname ***
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]: *** Including module: kernel-modules ***
Jan 31 07:31:45 np0005603663.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 31 07:31:45 np0005603663.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: qemu ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: fstab-sys ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: rootfs-block ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: terminfo ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: udev-rules ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: virtiofs ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Jan 31 07:31:46 np0005603663.novalocal chronyd[792]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: usrmount ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: base ***
Jan 31 07:31:46 np0005603663.novalocal dracut[1287]: *** Including module: fs-lib ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Including module: kdumpbase ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Including module: openssl ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Including module: shutdown ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Including module: squash ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Including modules done ***
Jan 31 07:31:47 np0005603663.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Jan 31 07:31:48 np0005603663.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Jan 31 07:31:48 np0005603663.novalocal dracut[1287]: *** Resolving executable dependencies ***
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: IRQ 25 affinity is now unmanaged
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: IRQ 31 affinity is now unmanaged
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: IRQ 28 affinity is now unmanaged
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: IRQ 32 affinity is now unmanaged
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: IRQ 30 affinity is now unmanaged
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 07:31:49 np0005603663.novalocal irqbalance[789]: IRQ 29 affinity is now unmanaged
Jan 31 07:31:49 np0005603663.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Jan 31 07:31:49 np0005603663.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Jan 31 07:31:49 np0005603663.novalocal dracut[1287]: *** Store current command line parameters ***
Jan 31 07:31:49 np0005603663.novalocal dracut[1287]: Stored kernel commandline:
Jan 31 07:31:49 np0005603663.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Jan 31 07:31:50 np0005603663.novalocal dracut[1287]: *** Install squash loader ***
Jan 31 07:31:50 np0005603663.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:31:50 np0005603663.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: *** Hardlinking files ***
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Mode:           real
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Files:          50
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Linked:         0 files
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Compared:       0 xattrs
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Compared:       0 files
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Saved:          0 B
Jan 31 07:31:51 np0005603663.novalocal dracut[1287]: Duration:       0.000532 seconds
Jan 31 07:31:52 np0005603663.novalocal dracut[1287]: *** Hardlinking files done ***
Jan 31 07:31:52 np0005603663.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 07:31:52 np0005603663.novalocal kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Jan 31 07:31:52 np0005603663.novalocal kdumpctl[1017]: kdump: Starting kdump: [OK]
Jan 31 07:31:52 np0005603663.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 31 07:31:52 np0005603663.novalocal systemd[1]: Startup finished in 1.224s (kernel) + 2.474s (initrd) + 15.620s (userspace) = 19.320s.
Jan 31 07:32:10 np0005603663.novalocal sshd-session[4303]: Accepted publickey for zuul from 38.102.83.114 port 37076 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 07:32:10 np0005603663.novalocal systemd-logind[793]: New session 1 of user zuul.
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Queued start job for default target Main User Target.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Created slice User Application Slice.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Reached target Paths.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Reached target Timers.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Starting D-Bus User Message Bus Socket...
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Starting Create User's Volatile Files and Directories...
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Listening on D-Bus User Message Bus Socket.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Reached target Sockets.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Finished Create User's Volatile Files and Directories.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Reached target Basic System.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Reached target Main User Target.
Jan 31 07:32:10 np0005603663.novalocal systemd[4307]: Startup finished in 220ms.
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 31 07:32:10 np0005603663.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 31 07:32:10 np0005603663.novalocal sshd-session[4303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:32:11 np0005603663.novalocal python3[4391]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:32:13 np0005603663.novalocal python3[4419]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:32:19 np0005603663.novalocal python3[4477]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:32:20 np0005603663.novalocal python3[4517]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 07:32:22 np0005603663.novalocal python3[4543]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2Nk/3X7ElbK5UI3l4t5jdoE6KuIlCQvu2c4Ei9SOHuuE9jliuN7rH2FkoHI1foCUeuqIIbuVzRH47hK+pxsFN5aoANBwNDx1IwijCiyH4vm8xmIQQzFRxMcIchAX5xdujjhf3pqG0A9IW2WVdYY2aFX2RA0L7I2TgYUbHrrGO/Z/9EUolfHRtmZIGhQgTzUTv7hJNTs24+mTQctJVNQgt41VaDc+wjjcfbiqFy4OdGWxdxXTNnQY/NMkp/X72NSJtBMNl2a0AWJivbPkO9V0q5fAM8zrcLDTJkPuMScptn+k3t8abB/Jy9NFuwujTB+7X4XAxGqMei9w4QM4Ml9hlngPdHF7xq8hEq50HG9DhKc+swIne3H9ZWlpnRwna9KxB0DerbNki0ClbzqWuvIZmzf9YzUZHfRAQfSuzhJT1/BlmDmTzRel0q/1exqyzleQFl1dmb4wErD64iemohgYdLioDwHqXivKuNBLULdM/pt2E9yh6HJGNf6FwZ5zkjl0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:22 np0005603663.novalocal python3[4567]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:23 np0005603663.novalocal python3[4666]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:23 np0005603663.novalocal python3[4737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844742.9382787-207-145605384033662/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=b80ea159c8054a21a4460cdc1f619690_id_rsa follow=False checksum=a281031e2470a2409ffaebd8f471464a1a03b1ee backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:24 np0005603663.novalocal python3[4860]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:24 np0005603663.novalocal python3[4931]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844743.8844178-240-192301579928147/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=b80ea159c8054a21a4460cdc1f619690_id_rsa.pub follow=False checksum=20ff6b517a1b3593179ca6b6f5da64fb7270f957 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:25 np0005603663.novalocal python3[4979]: ansible-ping Invoked with data=pong
Jan 31 07:32:26 np0005603663.novalocal python3[5003]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:32:29 np0005603663.novalocal python3[5061]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 07:32:30 np0005603663.novalocal python3[5093]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:30 np0005603663.novalocal python3[5117]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:30 np0005603663.novalocal python3[5141]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:31 np0005603663.novalocal python3[5165]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:31 np0005603663.novalocal python3[5189]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:31 np0005603663.novalocal python3[5213]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:33 np0005603663.novalocal sudo[5237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ormxybsxsoreftplxkxgcfjzfyoevbum ; /usr/bin/python3'
Jan 31 07:32:33 np0005603663.novalocal sudo[5237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:33 np0005603663.novalocal python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:33 np0005603663.novalocal sudo[5237]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:33 np0005603663.novalocal sudo[5315]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioqnsghlhavxbhnfldysudunfjdycpgv ; /usr/bin/python3'
Jan 31 07:32:33 np0005603663.novalocal sudo[5315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:33 np0005603663.novalocal python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:33 np0005603663.novalocal sudo[5315]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:34 np0005603663.novalocal sudo[5388]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrrgdxdfoljemrimskxzalgcsdeyqau ; /usr/bin/python3'
Jan 31 07:32:34 np0005603663.novalocal sudo[5388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:34 np0005603663.novalocal python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844753.5735636-21-83259087156497/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:34 np0005603663.novalocal sudo[5388]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:35 np0005603663.novalocal python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:35 np0005603663.novalocal python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:35 np0005603663.novalocal python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:35 np0005603663.novalocal python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:36 np0005603663.novalocal python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:36 np0005603663.novalocal python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:36 np0005603663.novalocal python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:37 np0005603663.novalocal python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:37 np0005603663.novalocal python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:37 np0005603663.novalocal python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:37 np0005603663.novalocal python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:38 np0005603663.novalocal python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:38 np0005603663.novalocal python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:38 np0005603663.novalocal python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:38 np0005603663.novalocal python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:39 np0005603663.novalocal python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:39 np0005603663.novalocal python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:39 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 07:32:39 np0005603663.novalocal irqbalance[789]: IRQ 26 affinity is now unmanaged
Jan 31 07:32:39 np0005603663.novalocal python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:39 np0005603663.novalocal python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:40 np0005603663.novalocal python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:40 np0005603663.novalocal python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:40 np0005603663.novalocal python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:40 np0005603663.novalocal python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:41 np0005603663.novalocal python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:41 np0005603663.novalocal python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:41 np0005603663.novalocal python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:32:45 np0005603663.novalocal sudo[6062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuzbbkzpynlesmrfgtbjsozphdqwyuqt ; /usr/bin/python3'
Jan 31 07:32:45 np0005603663.novalocal sudo[6062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:45 np0005603663.novalocal python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 07:32:45 np0005603663.novalocal systemd[1]: Starting Time & Date Service...
Jan 31 07:32:45 np0005603663.novalocal systemd[1]: Started Time & Date Service.
Jan 31 07:32:45 np0005603663.novalocal systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 31 07:32:45 np0005603663.novalocal sudo[6062]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:45 np0005603663.novalocal sudo[6093]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqpcxbzdxjahwxdbvudltwiyspyyjkbu ; /usr/bin/python3'
Jan 31 07:32:45 np0005603663.novalocal sudo[6093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:45 np0005603663.novalocal python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:45 np0005603663.novalocal sudo[6093]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:46 np0005603663.novalocal python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:46 np0005603663.novalocal python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769844766.0813003-153-54903357118270/source _original_basename=tmp05qh8us3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:47 np0005603663.novalocal python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:47 np0005603663.novalocal python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769844766.94846-183-101293122512547/source _original_basename=tmpg6pll65s follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:48 np0005603663.novalocal sudo[6513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkmiohgmnopusiccdrrcesliudfyava ; /usr/bin/python3'
Jan 31 07:32:48 np0005603663.novalocal sudo[6513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:48 np0005603663.novalocal python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:48 np0005603663.novalocal sudo[6513]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:48 np0005603663.novalocal sudo[6586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjnhngprpienplayiqxozomofjjcxetg ; /usr/bin/python3'
Jan 31 07:32:48 np0005603663.novalocal sudo[6586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:48 np0005603663.novalocal python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769844768.0524254-231-119833535994984/source _original_basename=tmpn1h2xzgv follow=False checksum=6bf095e75b543d66829428b8a294812d38465cfe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:48 np0005603663.novalocal sudo[6586]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:49 np0005603663.novalocal python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:32:49 np0005603663.novalocal python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:32:49 np0005603663.novalocal sudo[6740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkpqutlshhhhdhzfjcoddlwymtboorz ; /usr/bin/python3'
Jan 31 07:32:49 np0005603663.novalocal sudo[6740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:49 np0005603663.novalocal python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:32:49 np0005603663.novalocal sudo[6740]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:50 np0005603663.novalocal sudo[6813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayoppdcyzzlecuyyglnulipeadgnqrly ; /usr/bin/python3'
Jan 31 07:32:50 np0005603663.novalocal sudo[6813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:50 np0005603663.novalocal python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844769.7094617-273-50625801847282/source _original_basename=tmp8y66r_f4 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:50 np0005603663.novalocal sudo[6813]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:50 np0005603663.novalocal sudo[6864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmcicmibijydgfigifsyvkbxfelfatsj ; /usr/bin/python3'
Jan 31 07:32:50 np0005603663.novalocal sudo[6864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:32:51 np0005603663.novalocal python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-1870-bf1a-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:32:51 np0005603663.novalocal sudo[6864]: pam_unix(sudo:session): session closed for user root
Jan 31 07:32:51 np0005603663.novalocal python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-1870-bf1a-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 07:32:52 np0005603663.novalocal python3[6923]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:32:59 np0005603663.novalocal irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 07:32:59 np0005603663.novalocal irqbalance[789]: IRQ 27 affinity is now unmanaged
Jan 31 07:33:15 np0005603663.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 07:33:16 np0005603663.novalocal sudo[6949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvalzzgwvnztxwbsvoaixclurjzlggmv ; /usr/bin/python3'
Jan 31 07:33:16 np0005603663.novalocal sudo[6949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:33:16 np0005603663.novalocal python3[6951]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:33:16 np0005603663.novalocal sudo[6949]: pam_unix(sudo:session): session closed for user root
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 07:33:50 np0005603663.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 07:33:50 np0005603663.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4266] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:33:50 np0005603663.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4437] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4463] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4468] device (eth1): carrier: link connected
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4471] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4478] policy: auto-activating connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c)
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4484] device (eth1): Activation: starting connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c)
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4486] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4490] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4495] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:33:50 np0005603663.novalocal NetworkManager[861]: <info>  [1769844830.4499] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:33:51 np0005603663.novalocal python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-4cef-bc83-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:34:00 np0005603663.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrscmarwjclkwbwoavqpovbxbgjmsus ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:34:00 np0005603663.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:01 np0005603663.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:34:01 np0005603663.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:01 np0005603663.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnohxkyelwegwqfkninbsgzzbmcbgung ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:34:01 np0005603663.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:01 np0005603663.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844840.7602708-102-256474702195950/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=b65d917452bf41b08bb7a11e59261a67db6a7912 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:34:01 np0005603663.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:02 np0005603663.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrmkydoijodjglhudydeyavqspehdzy ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:34:02 np0005603663.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:34:02 np0005603663.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Stopping Network Manager...
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5083] caught SIGTERM, shutting down normally.
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5097] dhcp4 (eth0): canceled DHCP transaction
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5098] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5098] dhcp4 (eth0): state changed no lease
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5104] manager: NetworkManager state is now CONNECTING
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5213] dhcp4 (eth1): canceled DHCP transaction
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5213] dhcp4 (eth1): state changed no lease
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[861]: <info>  [1769844842.5259] exiting (success)
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Stopped Network Manager.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: NetworkManager.service: Consumed 1.311s CPU time, 10.1M memory peak.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Starting Network Manager...
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.5662] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:46d0e983-b0c8-47a0-b578-409408b2d808)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.5665] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.5723] manager[0x563a90aee000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Starting Hostname Service...
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Started Hostname Service.
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6400] hostname: hostname: using hostnamed
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6402] hostname: static hostname changed from (none) to "np0005603663.novalocal"
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6407] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6414] manager[0x563a90aee000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6415] manager[0x563a90aee000]: rfkill: WWAN hardware radio set enabled
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6440] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6440] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6441] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6442] manager: Networking is enabled by state file
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6444] settings: Loaded settings plugin: keyfile (internal)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6447] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6468] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6475] dhcp: init: Using DHCP client 'internal'
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6477] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6480] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6484] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6489] device (lo): Activation: starting connection 'lo' (4e410dfc-e55f-4386-a962-128f9b1580ba)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6495] device (eth0): carrier: link connected
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6498] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6502] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6503] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6507] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6511] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6515] device (eth1): carrier: link connected
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6520] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6524] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c) (indicated)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6524] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6527] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6533] device (eth1): Activation: starting connection 'Wired connection 1' (ab34d4aa-4908-314b-843b-ee48e300858c)
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Started Network Manager.
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6539] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6543] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6545] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6546] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6548] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6550] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6551] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6554] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6556] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6560] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6562] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6568] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6570] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6589] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6590] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6595] device (lo): Activation: successful, device activated.
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6601] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6606] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6674] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6697] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6698] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6701] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6704] device (eth0): Activation: successful, device activated.
Jan 31 07:34:02 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844842.6709] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 07:34:02 np0005603663.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Jan 31 07:34:02 np0005603663.novalocal python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-4cef-bc83-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:34:12 np0005603663.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:34:32 np0005603663.novalocal systemd[4307]: Starting Mark boot as successful...
Jan 31 07:34:32 np0005603663.novalocal systemd[4307]: Finished Mark boot as successful.
Jan 31 07:34:32 np0005603663.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6304] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 07:34:47 np0005603663.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:34:47 np0005603663.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6682] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6691] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6708] device (eth1): Activation: successful, device activated.
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6721] manager: startup complete
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6723] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <warn>  [1769844887.6741] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6751] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6893] dhcp4 (eth1): canceled DHCP transaction
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6894] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6894] dhcp4 (eth1): state changed no lease
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6908] policy: auto-activating connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6913] device (eth1): Activation: starting connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6914] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6917] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6924] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6933] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6974] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6975] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 07:34:47 np0005603663.novalocal NetworkManager[7191]: <info>  [1769844887.6979] device (eth1): Activation: successful, device activated.
Jan 31 07:34:57 np0005603663.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:35:03 np0005603663.novalocal sshd-session[4318]: Received disconnect from 38.102.83.114 port 37076:11: disconnected by user
Jan 31 07:35:03 np0005603663.novalocal sshd-session[4318]: Disconnected from user zuul 38.102.83.114 port 37076
Jan 31 07:35:03 np0005603663.novalocal sshd-session[4303]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:35:03 np0005603663.novalocal systemd-logind[793]: Session 1 logged out. Waiting for processes to exit.
Jan 31 07:35:06 np0005603663.novalocal sshd-session[7295]: Accepted publickey for zuul from 38.102.83.114 port 37274 ssh2: RSA SHA256:1cKsZJy0b8y0Op+4rpocXv0xojY9kddve1Dq+1Ump7k
Jan 31 07:35:06 np0005603663.novalocal systemd-logind[793]: New session 3 of user zuul.
Jan 31 07:35:06 np0005603663.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 31 07:35:06 np0005603663.novalocal sshd-session[7295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:35:06 np0005603663.novalocal sudo[7374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgxtzlcvoeyxltdlpiraojinjldbtgmk ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:35:06 np0005603663.novalocal sudo[7374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:35:06 np0005603663.novalocal python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:35:06 np0005603663.novalocal sudo[7374]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:07 np0005603663.novalocal sudo[7447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yreabmymvunaviybaaakgraacbgmazwe ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 31 07:35:07 np0005603663.novalocal sudo[7447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:35:07 np0005603663.novalocal python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844906.669143-267-82077250691609/source _original_basename=tmpss547bu9 follow=False checksum=1f1caabb57b4a4203f0a901b5db5015b865079c5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:35:07 np0005603663.novalocal sudo[7447]: pam_unix(sudo:session): session closed for user root
Jan 31 07:35:09 np0005603663.novalocal sshd-session[7298]: Connection closed by 38.102.83.114 port 37274
Jan 31 07:35:09 np0005603663.novalocal sshd-session[7295]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:35:09 np0005603663.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 07:35:09 np0005603663.novalocal systemd-logind[793]: Session 3 logged out. Waiting for processes to exit.
Jan 31 07:35:09 np0005603663.novalocal systemd-logind[793]: Removed session 3.
Jan 31 07:36:53 np0005603663.novalocal sshd-session[7475]: error: kex_exchange_identification: read: Connection reset by peer
Jan 31 07:36:53 np0005603663.novalocal sshd-session[7475]: Connection reset by 176.120.22.52 port 5686
Jan 31 07:37:32 np0005603663.novalocal systemd[4307]: Created slice User Background Tasks Slice.
Jan 31 07:37:32 np0005603663.novalocal systemd[4307]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 07:37:32 np0005603663.novalocal systemd[4307]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 07:40:00 np0005603663.novalocal sshd-session[7481]: Accepted publickey for zuul from 38.102.83.114 port 41952 ssh2: RSA SHA256:1cKsZJy0b8y0Op+4rpocXv0xojY9kddve1Dq+1Ump7k
Jan 31 07:40:00 np0005603663.novalocal systemd-logind[793]: New session 4 of user zuul.
Jan 31 07:40:00 np0005603663.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 31 07:40:00 np0005603663.novalocal sshd-session[7481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:40:00 np0005603663.novalocal sudo[7508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqpzzghdqejhyjmidkaavpckqwllczcw ; /usr/bin/python3'
Jan 31 07:40:00 np0005603663.novalocal sudo[7508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:00 np0005603663.novalocal python3[7510]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-0250-99f0-000000002159-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:40:00 np0005603663.novalocal sudo[7508]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:01 np0005603663.novalocal sudo[7537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixuwjioprpjfbrjwpwlbwqwkjkcskomr ; /usr/bin/python3'
Jan 31 07:40:01 np0005603663.novalocal sudo[7537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:01 np0005603663.novalocal python3[7539]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:40:01 np0005603663.novalocal sudo[7537]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:01 np0005603663.novalocal sudo[7563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmlpeypvmkdmusuhzdhhsoyaeyqpbqrx ; /usr/bin/python3'
Jan 31 07:40:01 np0005603663.novalocal sudo[7563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:01 np0005603663.novalocal python3[7565]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:40:01 np0005603663.novalocal sudo[7563]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:01 np0005603663.novalocal sudo[7589]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znfcagbvkuynumjrhsumhvipcvmqaigv ; /usr/bin/python3'
Jan 31 07:40:01 np0005603663.novalocal sudo[7589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:01 np0005603663.novalocal python3[7591]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:40:01 np0005603663.novalocal sudo[7589]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:01 np0005603663.novalocal sudo[7615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdxyppmnicwonsepfibtzbqcxwvdsmsi ; /usr/bin/python3'
Jan 31 07:40:01 np0005603663.novalocal sudo[7615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:01 np0005603663.novalocal python3[7617]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:40:01 np0005603663.novalocal sudo[7615]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:02 np0005603663.novalocal sudo[7641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwybzyljfkxncnnvsxoharmdsvrseehp ; /usr/bin/python3'
Jan 31 07:40:02 np0005603663.novalocal sudo[7641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:02 np0005603663.novalocal python3[7643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:40:02 np0005603663.novalocal sudo[7641]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:03 np0005603663.novalocal sudo[7719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sekngjynclowrwmcvyibxqodhqnixbcd ; /usr/bin/python3'
Jan 31 07:40:03 np0005603663.novalocal sudo[7719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:03 np0005603663.novalocal python3[7721]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:40:03 np0005603663.novalocal sudo[7719]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:03 np0005603663.novalocal sudo[7792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guqzyxxgrdqhzbeiwzrltupjpvyrkltz ; /usr/bin/python3'
Jan 31 07:40:03 np0005603663.novalocal sudo[7792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:03 np0005603663.novalocal python3[7794]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845203.037129-488-173544911627575/source _original_basename=tmpbqhi5qh2 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:40:03 np0005603663.novalocal sudo[7792]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:04 np0005603663.novalocal sudo[7842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skojutbtkpwcjkrmsdlpwqhqddzvfgpq ; /usr/bin/python3'
Jan 31 07:40:04 np0005603663.novalocal sudo[7842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:04 np0005603663.novalocal python3[7844]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 07:40:04 np0005603663.novalocal systemd[1]: Reloading.
Jan 31 07:40:04 np0005603663.novalocal systemd-rc-local-generator[7862]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:40:04 np0005603663.novalocal sudo[7842]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:06 np0005603663.novalocal sudo[7898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjurztimpeeytdteauqktdcxovezkejz ; /usr/bin/python3'
Jan 31 07:40:06 np0005603663.novalocal sudo[7898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:06 np0005603663.novalocal python3[7900]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 07:40:06 np0005603663.novalocal sudo[7898]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:06 np0005603663.novalocal sudo[7924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvptawgfnrfbldegiutqdtmmbydvpcku ; /usr/bin/python3'
Jan 31 07:40:06 np0005603663.novalocal sudo[7924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:06 np0005603663.novalocal python3[7926]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:40:06 np0005603663.novalocal sudo[7924]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:07 np0005603663.novalocal sudo[7952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kejywaqiufxpthxnwvdvargwqhbllpah ; /usr/bin/python3'
Jan 31 07:40:07 np0005603663.novalocal sudo[7952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:07 np0005603663.novalocal python3[7954]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:40:07 np0005603663.novalocal sudo[7952]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:07 np0005603663.novalocal sudo[7980]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gttutladisespklpasdpiaeneevaxopv ; /usr/bin/python3'
Jan 31 07:40:07 np0005603663.novalocal sudo[7980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:07 np0005603663.novalocal python3[7982]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:40:07 np0005603663.novalocal sudo[7980]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:07 np0005603663.novalocal sudo[8008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvafwdohsaogncunjnpgysirrinjfknb ; /usr/bin/python3'
Jan 31 07:40:07 np0005603663.novalocal sudo[8008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:07 np0005603663.novalocal python3[8010]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:40:07 np0005603663.novalocal sudo[8008]: pam_unix(sudo:session): session closed for user root
Jan 31 07:40:08 np0005603663.novalocal python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-0250-99f0-000000002160-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:40:08 np0005603663.novalocal python3[8067]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 07:40:10 np0005603663.novalocal sshd-session[7484]: Connection closed by 38.102.83.114 port 41952
Jan 31 07:40:10 np0005603663.novalocal sshd-session[7481]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:40:10 np0005603663.novalocal systemd-logind[793]: Session 4 logged out. Waiting for processes to exit.
Jan 31 07:40:10 np0005603663.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 07:40:10 np0005603663.novalocal systemd[1]: session-4.scope: Consumed 3.728s CPU time.
Jan 31 07:40:10 np0005603663.novalocal systemd-logind[793]: Removed session 4.
Jan 31 07:40:12 np0005603663.novalocal sshd-session[8075]: Accepted publickey for zuul from 38.102.83.114 port 56334 ssh2: RSA SHA256:1cKsZJy0b8y0Op+4rpocXv0xojY9kddve1Dq+1Ump7k
Jan 31 07:40:12 np0005603663.novalocal systemd-logind[793]: New session 5 of user zuul.
Jan 31 07:40:12 np0005603663.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 31 07:40:12 np0005603663.novalocal sshd-session[8075]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:40:12 np0005603663.novalocal sudo[8102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glvoocxiyadnjutblmthzdwmfmbsesic ; /usr/bin/python3'
Jan 31 07:40:12 np0005603663.novalocal sudo[8102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:40:12 np0005603663.novalocal python3[8104]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 07:40:20 np0005603663.novalocal setsebool[8147]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 07:40:20 np0005603663.novalocal setsebool[8147]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:40:37 np0005603663.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 07:40:47 np0005603663.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 07:41:04 np0005603663.novalocal dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 07:41:05 np0005603663.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 07:41:05 np0005603663.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 31 07:41:05 np0005603663.novalocal systemd[1]: Reloading.
Jan 31 07:41:05 np0005603663.novalocal systemd-rc-local-generator[8912]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:41:05 np0005603663.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 07:41:06 np0005603663.novalocal sudo[8102]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:07 np0005603663.novalocal python3[10701]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-c553-bc95-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:41:07 np0005603663.novalocal kernel: evm: overlay not supported
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: Starting D-Bus User Message Bus...
Jan 31 07:41:07 np0005603663.novalocal dbus-broker-launch[11877]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 07:41:07 np0005603663.novalocal dbus-broker-launch[11877]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: Started D-Bus User Message Bus.
Jan 31 07:41:07 np0005603663.novalocal dbus-broker-lau[11877]: Ready
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: Created slice Slice /user.
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: podman-11747.scope: unit configures an IP firewall, but not running as root.
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 07:41:07 np0005603663.novalocal systemd[4307]: Started podman-11747.scope.
Jan 31 07:41:08 np0005603663.novalocal systemd[4307]: Started podman-pause-7512294e.scope.
Jan 31 07:41:08 np0005603663.novalocal sudo[12942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upyrafdrymehddntflugalwctmdluyeu ; /usr/bin/python3'
Jan 31 07:41:08 np0005603663.novalocal sudo[12942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:09 np0005603663.novalocal python3[12965]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.129.56.245:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.129.56.245:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:41:09 np0005603663.novalocal python3[12965]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 07:41:09 np0005603663.novalocal sudo[12942]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:09 np0005603663.novalocal sshd-session[8078]: Connection closed by 38.102.83.114 port 56334
Jan 31 07:41:09 np0005603663.novalocal sshd-session[8075]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:41:09 np0005603663.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 07:41:09 np0005603663.novalocal systemd[1]: session-5.scope: Consumed 39.903s CPU time.
Jan 31 07:41:09 np0005603663.novalocal systemd-logind[793]: Session 5 logged out. Waiting for processes to exit.
Jan 31 07:41:09 np0005603663.novalocal systemd-logind[793]: Removed session 5.
Jan 31 07:41:29 np0005603663.novalocal sshd-session[24283]: Unable to negotiate with 38.102.83.220 port 32976: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 07:41:29 np0005603663.novalocal sshd-session[24286]: Connection closed by 38.102.83.220 port 32936 [preauth]
Jan 31 07:41:29 np0005603663.novalocal sshd-session[24284]: Connection closed by 38.102.83.220 port 32948 [preauth]
Jan 31 07:41:29 np0005603663.novalocal sshd-session[24281]: Unable to negotiate with 38.102.83.220 port 32952: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 07:41:29 np0005603663.novalocal sshd-session[24287]: Unable to negotiate with 38.102.83.220 port 32962: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 07:41:32 np0005603663.novalocal sshd-session[26059]: Accepted publickey for zuul from 38.102.83.114 port 36564 ssh2: RSA SHA256:1cKsZJy0b8y0Op+4rpocXv0xojY9kddve1Dq+1Ump7k
Jan 31 07:41:32 np0005603663.novalocal systemd-logind[793]: New session 6 of user zuul.
Jan 31 07:41:32 np0005603663.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 31 07:41:32 np0005603663.novalocal sshd-session[26059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:41:33 np0005603663.novalocal python3[26172]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjjUgFuyNt0hmZtqStAw9s3JKw0g6jz1BiB4AD1tE2sQNpVPKYzLIUbhhGJMGEywRb0aZD3E65SfsYEJ5sq0hg= zuul@np0005603662.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:41:33 np0005603663.novalocal sudo[26381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzrbgkqrnuswetlbwcgfkakqolnqbwpw ; /usr/bin/python3'
Jan 31 07:41:33 np0005603663.novalocal sudo[26381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:33 np0005603663.novalocal python3[26394]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjjUgFuyNt0hmZtqStAw9s3JKw0g6jz1BiB4AD1tE2sQNpVPKYzLIUbhhGJMGEywRb0aZD3E65SfsYEJ5sq0hg= zuul@np0005603662.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:41:33 np0005603663.novalocal sudo[26381]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:34 np0005603663.novalocal sudo[26843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwxmxkuzmltlkasnwcsxvfnanazxnfo ; /usr/bin/python3'
Jan 31 07:41:34 np0005603663.novalocal sudo[26843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:34 np0005603663.novalocal python3[26856]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603663.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 07:41:34 np0005603663.novalocal useradd[26947]: new group: name=cloud-admin, GID=1002
Jan 31 07:41:34 np0005603663.novalocal useradd[26947]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 31 07:41:34 np0005603663.novalocal sudo[26843]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:34 np0005603663.novalocal sudo[27090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsaxdblcpbtbfzvyevqnyobhcsipssgm ; /usr/bin/python3'
Jan 31 07:41:34 np0005603663.novalocal sudo[27090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:34 np0005603663.novalocal python3[27101]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjjUgFuyNt0hmZtqStAw9s3JKw0g6jz1BiB4AD1tE2sQNpVPKYzLIUbhhGJMGEywRb0aZD3E65SfsYEJ5sq0hg= zuul@np0005603662.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 07:41:34 np0005603663.novalocal sudo[27090]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:35 np0005603663.novalocal sudo[27385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgfrqwcrakhxvaecdzhpnjwhvbxxjtp ; /usr/bin/python3'
Jan 31 07:41:35 np0005603663.novalocal sudo[27385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:35 np0005603663.novalocal python3[27396]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:41:35 np0005603663.novalocal sudo[27385]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:35 np0005603663.novalocal sudo[27697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swjdkewtkcuxkqknfzshdchijrewggno ; /usr/bin/python3'
Jan 31 07:41:35 np0005603663.novalocal sudo[27697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:35 np0005603663.novalocal python3[27705]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845294.9553869-135-181466581987942/source _original_basename=tmphrrxpvdr follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:41:35 np0005603663.novalocal sudo[27697]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:36 np0005603663.novalocal sudo[28057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iykdlhhtihrmsbsrurermigmwngwfizf ; /usr/bin/python3'
Jan 31 07:41:36 np0005603663.novalocal sudo[28057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:41:36 np0005603663.novalocal python3[28065]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 07:41:36 np0005603663.novalocal systemd[1]: Starting Hostname Service...
Jan 31 07:41:36 np0005603663.novalocal systemd[1]: Started Hostname Service.
Jan 31 07:41:36 np0005603663.novalocal systemd-hostnamed[28173]: Changed pretty hostname to 'compute-0'
Jan 31 07:41:36 compute-0 systemd-hostnamed[28173]: Hostname set to <compute-0> (static)
Jan 31 07:41:36 compute-0 NetworkManager[7191]: <info>  [1769845296.5788] hostname: static hostname changed from "np0005603663.novalocal" to "compute-0"
Jan 31 07:41:36 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 07:41:36 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 07:41:36 compute-0 sudo[28057]: pam_unix(sudo:session): session closed for user root
Jan 31 07:41:36 compute-0 sshd-session[26109]: Connection closed by 38.102.83.114 port 36564
Jan 31 07:41:36 compute-0 sshd-session[26059]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:41:36 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 07:41:36 compute-0 systemd[1]: session-6.scope: Consumed 2.166s CPU time.
Jan 31 07:41:36 compute-0 systemd-logind[793]: Session 6 logged out. Waiting for processes to exit.
Jan 31 07:41:36 compute-0 systemd-logind[793]: Removed session 6.
Jan 31 07:41:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 07:41:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 07:41:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 41.050s CPU time.
Jan 31 07:41:40 compute-0 systemd[1]: run-racebbd5a8ccf495490f48e06e73692f3.service: Deactivated successfully.
Jan 31 07:41:46 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 07:42:06 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 07:44:57 compute-0 sshd-session[29994]: Connection closed by 92.118.39.76 port 35596
Jan 31 07:45:56 compute-0 sshd-session[29996]: Accepted publickey for zuul from 38.102.83.220 port 45508 ssh2: RSA SHA256:1cKsZJy0b8y0Op+4rpocXv0xojY9kddve1Dq+1Ump7k
Jan 31 07:45:56 compute-0 systemd-logind[793]: New session 7 of user zuul.
Jan 31 07:45:56 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 31 07:45:56 compute-0 sshd-session[29996]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:45:57 compute-0 python3[30072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:45:58 compute-0 sudo[30186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjpzumrccxhilmyeppveqxdtmxbqloxk ; /usr/bin/python3'
Jan 31 07:45:58 compute-0 sudo[30186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:45:58 compute-0 python3[30188]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:45:58 compute-0 sudo[30186]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:59 compute-0 sudo[30259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqrrdmxkmikwinluuybourqmgciqspna ; /usr/bin/python3'
Jan 31 07:45:59 compute-0 sudo[30259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:45:59 compute-0 python3[30261]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:45:59 compute-0 sudo[30259]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:59 compute-0 sudo[30285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbplkberqqfjdpfyofroisgsebthqxhu ; /usr/bin/python3'
Jan 31 07:45:59 compute-0 sudo[30285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:45:59 compute-0 python3[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:45:59 compute-0 sudo[30285]: pam_unix(sudo:session): session closed for user root
Jan 31 07:45:59 compute-0 sudo[30358]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqficoyasqricwntkxyfbfgixpkbggzf ; /usr/bin/python3'
Jan 31 07:45:59 compute-0 sudo[30358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:45:59 compute-0 python3[30360]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:45:59 compute-0 sudo[30358]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:00 compute-0 sudo[30384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjyzieaiexwucjvsrjplogmeniwnxnzo ; /usr/bin/python3'
Jan 31 07:46:00 compute-0 sudo[30384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:00 compute-0 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:46:00 compute-0 sudo[30384]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:00 compute-0 sudo[30457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyppzmfmqhhluwsdxmkoesvxryaiemmo ; /usr/bin/python3'
Jan 31 07:46:00 compute-0 sudo[30457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:00 compute-0 python3[30459]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:46:00 compute-0 sudo[30457]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:00 compute-0 sudo[30483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkgjoctafbqtfpqxpgeswkpzdlnsdfmt ; /usr/bin/python3'
Jan 31 07:46:00 compute-0 sudo[30483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:00 compute-0 python3[30485]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:46:00 compute-0 sudo[30483]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:00 compute-0 sudo[30556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdimreugakpvkkylwvbcjequujyeozbd ; /usr/bin/python3'
Jan 31 07:46:00 compute-0 sudo[30556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:01 compute-0 python3[30558]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:46:01 compute-0 sudo[30556]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:01 compute-0 sudo[30582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inzypaluxviiidqaxelotlzxrjadyagp ; /usr/bin/python3'
Jan 31 07:46:01 compute-0 sudo[30582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:01 compute-0 python3[30584]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:46:01 compute-0 sudo[30582]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:01 compute-0 sudo[30655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvobhqtwyntredjjtuscqoxkbruruxmg ; /usr/bin/python3'
Jan 31 07:46:01 compute-0 sudo[30655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:01 compute-0 python3[30657]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:46:01 compute-0 sudo[30655]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:01 compute-0 sudo[30681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzkaxccxdhupddupfyobufcyqhacgiud ; /usr/bin/python3'
Jan 31 07:46:01 compute-0 sudo[30681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:01 compute-0 python3[30683]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:46:01 compute-0 sudo[30681]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:02 compute-0 sudo[30754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkirtbhaortuerzxopignlmkivjaezxw ; /usr/bin/python3'
Jan 31 07:46:02 compute-0 sudo[30754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:02 compute-0 python3[30756]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:46:02 compute-0 sudo[30754]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:02 compute-0 sudo[30780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfllyotyntxpdefrfdswxkzlvslkznef ; /usr/bin/python3'
Jan 31 07:46:02 compute-0 sudo[30780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:02 compute-0 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 07:46:02 compute-0 sudo[30780]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:02 compute-0 sudo[30853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxdwjjmnsrdrjpfojhbwhavdbiagcebl ; /usr/bin/python3'
Jan 31 07:46:02 compute-0 sudo[30853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:46:02 compute-0 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769845558.3829346-33691-21007893850813/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:46:02 compute-0 sudo[30853]: pam_unix(sudo:session): session closed for user root
Jan 31 07:46:04 compute-0 sshd-session[30880]: Unable to negotiate with 192.168.122.11 port 35754: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 31 07:46:04 compute-0 sshd-session[30883]: Connection closed by 192.168.122.11 port 35732 [preauth]
Jan 31 07:46:04 compute-0 sshd-session[30881]: Connection closed by 192.168.122.11 port 35740 [preauth]
Jan 31 07:46:04 compute-0 sshd-session[30885]: Unable to negotiate with 192.168.122.11 port 35774: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 31 07:46:04 compute-0 sshd-session[30882]: Unable to negotiate with 192.168.122.11 port 35760: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 31 07:46:17 compute-0 python3[30913]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:47:02 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 07:47:02 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 07:47:02 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 07:47:02 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 07:50:35 compute-0 sshd-session[30919]: Invalid user solana from 92.118.39.76 port 37366
Jan 31 07:50:35 compute-0 sshd-session[30919]: Connection closed by invalid user solana 92.118.39.76 port 37366 [preauth]
Jan 31 07:51:17 compute-0 sshd-session[29999]: Received disconnect from 38.102.83.220 port 45508:11: disconnected by user
Jan 31 07:51:17 compute-0 sshd-session[29999]: Disconnected from user zuul 38.102.83.220 port 45508
Jan 31 07:51:17 compute-0 sshd-session[29996]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:51:17 compute-0 systemd-logind[793]: Session 7 logged out. Waiting for processes to exit.
Jan 31 07:51:17 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 07:51:17 compute-0 systemd[1]: session-7.scope: Consumed 4.465s CPU time.
Jan 31 07:51:17 compute-0 systemd-logind[793]: Removed session 7.
Jan 31 07:52:57 compute-0 sshd-session[30924]: Invalid user sol from 92.118.39.76 port 47784
Jan 31 07:52:57 compute-0 sshd-session[30924]: Connection closed by invalid user sol 92.118.39.76 port 47784 [preauth]
Jan 31 07:55:13 compute-0 sshd-session[30927]: Invalid user sol from 92.118.39.76 port 58180
Jan 31 07:55:13 compute-0 sshd-session[30927]: Connection closed by invalid user sol 92.118.39.76 port 58180 [preauth]
Jan 31 07:57:24 compute-0 sshd-session[30929]: Invalid user sol from 92.118.39.76 port 40346
Jan 31 07:57:24 compute-0 sshd-session[30929]: Connection closed by invalid user sol 92.118.39.76 port 40346 [preauth]
Jan 31 07:57:51 compute-0 sshd-session[30931]: Accepted publickey for zuul from 192.168.122.30 port 57160 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 07:57:51 compute-0 systemd-logind[793]: New session 8 of user zuul.
Jan 31 07:57:51 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 31 07:57:51 compute-0 sshd-session[30931]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:57:52 compute-0 python3.9[31084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:57:53 compute-0 sudo[31263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingstiubbxultnfxyjjkdjkstsaycvkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846272.8175566-27-58137073711304/AnsiballZ_command.py'
Jan 31 07:57:53 compute-0 sudo[31263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:57:53 compute-0 python3.9[31265]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:58:05 compute-0 sudo[31263]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:05 compute-0 sshd-session[30934]: Connection closed by 192.168.122.30 port 57160
Jan 31 07:58:05 compute-0 sshd-session[30931]: pam_unix(sshd:session): session closed for user zuul
Jan 31 07:58:05 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 07:58:05 compute-0 systemd[1]: session-8.scope: Consumed 7.996s CPU time.
Jan 31 07:58:05 compute-0 systemd-logind[793]: Session 8 logged out. Waiting for processes to exit.
Jan 31 07:58:05 compute-0 systemd-logind[793]: Removed session 8.
Jan 31 07:58:21 compute-0 sshd-session[31323]: Accepted publickey for zuul from 192.168.122.30 port 51954 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 07:58:21 compute-0 systemd-logind[793]: New session 9 of user zuul.
Jan 31 07:58:21 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 31 07:58:21 compute-0 sshd-session[31323]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 07:58:21 compute-0 python3.9[31476]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 07:58:22 compute-0 python3.9[31650]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:58:23 compute-0 sudo[31800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbqtlbhjljtxxkvxssvkmhzzajqjyqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846303.2218447-40-183526229998339/AnsiballZ_command.py'
Jan 31 07:58:23 compute-0 sudo[31800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:23 compute-0 python3.9[31802]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 07:58:23 compute-0 sudo[31800]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:24 compute-0 sudo[31953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtquxpzqyrktgrjpjptpkupnbgzxbfhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846304.2074122-52-162100787341117/AnsiballZ_stat.py'
Jan 31 07:58:24 compute-0 sudo[31953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:24 compute-0 python3.9[31955]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 07:58:24 compute-0 sudo[31953]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:25 compute-0 sudo[32105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrowbalntjtptyvpwjvgnlxdtsgykvpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846305.029675-60-54883685453533/AnsiballZ_file.py'
Jan 31 07:58:25 compute-0 sudo[32105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:25 compute-0 python3.9[32107]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:58:25 compute-0 sudo[32105]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:26 compute-0 sudo[32257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlosliisadyfxtlvbtyzsmjgdsacqkpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846305.8833394-68-166759595749535/AnsiballZ_stat.py'
Jan 31 07:58:26 compute-0 sudo[32257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:26 compute-0 python3.9[32259]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 07:58:26 compute-0 sudo[32257]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:26 compute-0 sudo[32380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfcjoffneiqevscufodeaoxqwyylbca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846305.8833394-68-166759595749535/AnsiballZ_copy.py'
Jan 31 07:58:26 compute-0 sudo[32380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:27 compute-0 python3.9[32382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846305.8833394-68-166759595749535/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:58:27 compute-0 sudo[32380]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:27 compute-0 sudo[32532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ataqrmxumbvhweqqfvogrdbzckuckvkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846307.2500856-83-50445425607354/AnsiballZ_setup.py'
Jan 31 07:58:27 compute-0 sudo[32532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:27 compute-0 python3.9[32534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:58:27 compute-0 sudo[32532]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:28 compute-0 sudo[32688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slssnadkxguzpnmayzmqhjgistixrxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846308.1264343-91-23520478963666/AnsiballZ_file.py'
Jan 31 07:58:28 compute-0 sudo[32688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:28 compute-0 python3.9[32690]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:58:28 compute-0 sudo[32688]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:28 compute-0 sudo[32840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypfoowexhfqjzcarqhutxybmxsvlajmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846308.7265022-100-131474050770443/AnsiballZ_file.py'
Jan 31 07:58:28 compute-0 sudo[32840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:29 compute-0 python3.9[32842]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 07:58:29 compute-0 sudo[32840]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:29 compute-0 python3.9[32992]: ansible-ansible.builtin.service_facts Invoked
Jan 31 07:58:32 compute-0 python3.9[33245]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 07:58:33 compute-0 python3.9[33395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:58:34 compute-0 python3.9[33549]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 07:58:35 compute-0 sudo[33705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcbdytbcukalzmcuwkbifopydtsujfob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846315.163502-148-5374928475893/AnsiballZ_setup.py'
Jan 31 07:58:35 compute-0 sudo[33705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:35 compute-0 python3.9[33707]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 07:58:35 compute-0 sudo[33705]: pam_unix(sudo:session): session closed for user root
Jan 31 07:58:36 compute-0 sudo[33789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujkioqqxqswleobeijvmbiygvnkravgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846315.163502-148-5374928475893/AnsiballZ_dnf.py'
Jan 31 07:58:36 compute-0 sudo[33789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 07:58:36 compute-0 python3.9[33791]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 07:59:17 compute-0 systemd[1]: Reloading.
Jan 31 07:59:17 compute-0 systemd-rc-local-generator[33986]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:59:17 compute-0 systemd[1]: Starting dnf makecache...
Jan 31 07:59:17 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 07:59:18 compute-0 dnf[34001]: Failed determining last makecache time.
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-barbican-42b4c41831408a8e323 151 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-python-glean-642fffe0203a8ffcc2443db52 177 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-cinder-1c00d6490d88e436f26ef 173 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-python-stevedore-c4acc5639fd2329372142 190 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-python-cloudkitty-tests-tempest-783703 184 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-diskimage-builder-61b717cc45660834fe9a 166 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-nova-eaa65f0b85123a4ee343246 159 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 systemd[1]: Reloading.
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-python-designate-tests-tempest-347fdbc 166 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-glance-1fd12c29b339f30fe823e 154 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 137 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 systemd-rc-local-generator[34039]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-manila-d783d10e75495b73866db 162 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-neutron-95cadbd379667c8520c8 161 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-octavia-5975097dd4b021385178 194 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-watcher-c014f81a8647287f6dcc 175 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-python-tcib-78032d201b02cee27e8e644c61 191 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 187 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-swift-dc98a8463506ac520c469a 129 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-python-tempestconf-8515371b7cceebd4282 193 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 dnf[34001]: delorean-openstack-heat-ui-013accbfd179753bc3f0 197 kB/s | 3.0 kB     00:00
Jan 31 07:59:18 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 07:59:18 compute-0 systemd[1]: Reloading.
Jan 31 07:59:18 compute-0 dnf[34001]: CentOS Stream 9 - BaseOS                         60 kB/s | 6.1 kB     00:00
Jan 31 07:59:18 compute-0 systemd-rc-local-generator[34089]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 07:59:18 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 07:59:18 compute-0 dnf[34001]: CentOS Stream 9 - AppStream                      63 kB/s | 6.5 kB     00:00
Jan 31 07:59:18 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 07:59:18 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 07:59:18 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 07:59:18 compute-0 dnf[34001]: CentOS Stream 9 - CRB                            60 kB/s | 6.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: CentOS Stream 9 - Extras packages                73 kB/s | 7.3 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: dlrn-antelope-testing                           146 kB/s | 3.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: dlrn-antelope-build-deps                        147 kB/s | 3.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: centos9-rabbitmq                                122 kB/s | 3.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: centos9-storage                                 125 kB/s | 3.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: centos9-opstools                                139 kB/s | 3.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: NFV SIG OpenvSwitch                             138 kB/s | 3.0 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: repo-setup-centos-appstream                     179 kB/s | 4.4 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: repo-setup-centos-baseos                        119 kB/s | 3.9 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: repo-setup-centos-highavailability              115 kB/s | 3.9 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: repo-setup-centos-powertools                    168 kB/s | 4.3 kB     00:00
Jan 31 07:59:19 compute-0 dnf[34001]: Extra Packages for Enterprise Linux 9 - x86_64  233 kB/s |  31 kB     00:00
Jan 31 07:59:20 compute-0 dnf[34001]: Metadata cache created.
Jan 31 07:59:20 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 07:59:20 compute-0 systemd[1]: Finished dnf makecache.
Jan 31 07:59:20 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.731s CPU time.
Jan 31 07:59:25 compute-0 sshd-session[34155]: Invalid user admin from 139.19.117.131 port 53578
Jan 31 07:59:25 compute-0 sshd-session[34155]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 31 07:59:35 compute-0 sshd-session[34155]: Connection closed by invalid user admin 139.19.117.131 port 53578 [preauth]
Jan 31 07:59:36 compute-0 sshd-session[34195]: Invalid user ubuntu from 92.118.39.76 port 50758
Jan 31 07:59:36 compute-0 sshd-session[34195]: Connection closed by invalid user ubuntu 92.118.39.76 port 50758 [preauth]
Jan 31 08:00:24 compute-0 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:00:24 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:00:24 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 08:00:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:00:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:00:24 compute-0 systemd[1]: Reloading.
Jan 31 08:00:24 compute-0 systemd-rc-local-generator[34433]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:00:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:00:25 compute-0 sudo[33789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:00:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:00:25 compute-0 systemd[1]: run-refecc68b3f4743bf9e3a292f814a0b05.service: Deactivated successfully.
Jan 31 08:00:25 compute-0 sudo[35353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umcfedatqlhxihmvodgnequihsaoyoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846425.669912-160-127692203020129/AnsiballZ_command.py'
Jan 31 08:00:25 compute-0 sudo[35353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:26 compute-0 python3.9[35355]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:27 compute-0 sudo[35353]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:27 compute-0 sudo[35634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmebxepecyjjsxlheqlmrepksfotlypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846427.220827-168-198788571179182/AnsiballZ_selinux.py'
Jan 31 08:00:27 compute-0 sudo[35634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:28 compute-0 python3.9[35636]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 08:00:28 compute-0 sudo[35634]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:28 compute-0 sudo[35786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pojompebisxdpbsquetmuuvrvzlscrlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846428.7112565-179-72097339350569/AnsiballZ_command.py'
Jan 31 08:00:28 compute-0 sudo[35786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:29 compute-0 python3.9[35788]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 08:00:29 compute-0 sudo[35786]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:30 compute-0 sudo[35939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfrqtpnsthbvgzudkpxpmmawrmjumvux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846429.9383829-187-233567441418721/AnsiballZ_file.py'
Jan 31 08:00:30 compute-0 sudo[35939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:32 compute-0 python3.9[35941]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:00:32 compute-0 sudo[35939]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:33 compute-0 sudo[36091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzbqoanfotwpglbnihxmmaldnxyittfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846432.7416904-195-207003800940543/AnsiballZ_mount.py'
Jan 31 08:00:33 compute-0 sudo[36091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:33 compute-0 python3.9[36093]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 08:00:33 compute-0 sudo[36091]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:34 compute-0 sudo[36243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvytjbxucwvxmfhgzxbgjpmdjacflbvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846434.1903703-223-148464637156111/AnsiballZ_file.py'
Jan 31 08:00:34 compute-0 sudo[36243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:34 compute-0 python3.9[36245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:00:34 compute-0 sudo[36243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:35 compute-0 sudo[36395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydpvkctmpkbsacwglmkunufjkredrku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846434.9325702-231-227987619527251/AnsiballZ_stat.py'
Jan 31 08:00:35 compute-0 sudo[36395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:40 compute-0 python3.9[36397]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:00:40 compute-0 sudo[36395]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:40 compute-0 sudo[36518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqtzsgqqtpcqdcfherocdawfpiklqevz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846434.9325702-231-227987619527251/AnsiballZ_copy.py'
Jan 31 08:00:40 compute-0 sudo[36518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:40 compute-0 python3.9[36520]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846434.9325702-231-227987619527251/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:00:40 compute-0 sudo[36518]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:41 compute-0 sudo[36670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utllsbevxzzipftpimzqpgoiqsnaqdbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846441.11189-255-13698818362472/AnsiballZ_stat.py'
Jan 31 08:00:41 compute-0 sudo[36670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:41 compute-0 python3.9[36672]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:00:41 compute-0 sudo[36670]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:41 compute-0 sudo[36822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldrjxjtnxovmldjthvlcrgjyhdbuiinr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846441.6794379-263-104611246706308/AnsiballZ_command.py'
Jan 31 08:00:41 compute-0 sudo[36822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:42 compute-0 python3.9[36824]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:00:42 compute-0 sudo[36822]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:42 compute-0 sudo[36975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbwxjvzrvvrprfhbnmqcbslfeawjzifj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846442.340866-271-44140351035556/AnsiballZ_file.py'
Jan 31 08:00:42 compute-0 sudo[36975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:42 compute-0 python3.9[36977]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:00:42 compute-0 sudo[36975]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:43 compute-0 sudo[37127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puxhsekliaijfkdobmjugwsjmtpwaegb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846443.1461222-282-130762288697949/AnsiballZ_getent.py'
Jan 31 08:00:43 compute-0 sudo[37127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:43 compute-0 python3.9[37129]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 08:00:43 compute-0 sudo[37127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:00:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:00:44 compute-0 sudo[37281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxlodkjyevgzlasrynjoymchtkztgpul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846443.8271801-290-263716787597948/AnsiballZ_group.py'
Jan 31 08:00:44 compute-0 sudo[37281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:44 compute-0 python3.9[37283]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:00:44 compute-0 groupadd[37284]: group added to /etc/group: name=qemu, GID=107
Jan 31 08:00:44 compute-0 groupadd[37284]: group added to /etc/gshadow: name=qemu
Jan 31 08:00:44 compute-0 groupadd[37284]: new group: name=qemu, GID=107
Jan 31 08:00:44 compute-0 sudo[37281]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:45 compute-0 sudo[37439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lczcnssjkxvypghtpcsqtyuqgjgcxqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846444.6043332-298-121746511372503/AnsiballZ_user.py'
Jan 31 08:00:45 compute-0 sudo[37439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:45 compute-0 python3.9[37441]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 08:00:45 compute-0 useradd[37443]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 08:00:45 compute-0 sudo[37439]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:45 compute-0 sudo[37599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clslqytfjqxcpyqjdbdjmoxxbycjwymp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846445.4158766-306-14319970529067/AnsiballZ_getent.py'
Jan 31 08:00:45 compute-0 sudo[37599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:45 compute-0 python3.9[37601]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 08:00:45 compute-0 sudo[37599]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:46 compute-0 sudo[37752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcbqflycylzcawvwrnduxjimkovrggn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846446.0278687-314-199762703320483/AnsiballZ_group.py'
Jan 31 08:00:46 compute-0 sudo[37752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:46 compute-0 python3.9[37754]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:00:46 compute-0 groupadd[37755]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 31 08:00:46 compute-0 groupadd[37755]: group added to /etc/gshadow: name=hugetlbfs
Jan 31 08:00:46 compute-0 groupadd[37755]: new group: name=hugetlbfs, GID=42477
Jan 31 08:00:46 compute-0 sudo[37752]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:46 compute-0 sudo[37910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkfiowymwxjtddnkygyklediffbfluab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846446.722093-323-101406787410622/AnsiballZ_file.py'
Jan 31 08:00:46 compute-0 sudo[37910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:47 compute-0 python3.9[37912]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 08:00:47 compute-0 sudo[37910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:47 compute-0 sudo[38062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkvyjwmibotzhnpnberajqnnxidahlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846447.3917553-334-92620194289190/AnsiballZ_dnf.py'
Jan 31 08:00:47 compute-0 sudo[38062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:47 compute-0 python3.9[38064]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:00:49 compute-0 sudo[38062]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:49 compute-0 sudo[38215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldngwnwptrilvvzgrmeqtivstogqlffy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846449.564458-342-61793039550533/AnsiballZ_file.py'
Jan 31 08:00:49 compute-0 sudo[38215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:49 compute-0 python3.9[38217]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:00:50 compute-0 sudo[38215]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:50 compute-0 sudo[38367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbiugfipovauonihhhzmsesjotwqtqas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846450.1510189-350-265004746518100/AnsiballZ_stat.py'
Jan 31 08:00:50 compute-0 sudo[38367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:50 compute-0 python3.9[38369]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:00:50 compute-0 sudo[38367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:50 compute-0 sudo[38490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmsrfdvirclvkdfochluhcyofmytnlkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846450.1510189-350-265004746518100/AnsiballZ_copy.py'
Jan 31 08:00:50 compute-0 sudo[38490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:51 compute-0 python3.9[38492]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846450.1510189-350-265004746518100/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:00:51 compute-0 sudo[38490]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:51 compute-0 sudo[38642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzqxtpjsnnypaefbvqtdxlwuplnlixvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846451.2875736-365-25857089633829/AnsiballZ_systemd.py'
Jan 31 08:00:51 compute-0 sudo[38642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:52 compute-0 python3.9[38644]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:00:52 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 08:00:52 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 08:00:52 compute-0 kernel: Bridge firewalling registered
Jan 31 08:00:52 compute-0 systemd-modules-load[38648]: Inserted module 'br_netfilter'
Jan 31 08:00:52 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 08:00:52 compute-0 sudo[38642]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:52 compute-0 sudo[38802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuclzbrrivabwslgsqzdwhcbcievtaqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846452.3765864-373-118189603131491/AnsiballZ_stat.py'
Jan 31 08:00:52 compute-0 sudo[38802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:52 compute-0 python3.9[38804]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:00:52 compute-0 sudo[38802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:53 compute-0 sudo[38925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbzfzxrutpljzaarhcfrftfmolsyoloe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846452.3765864-373-118189603131491/AnsiballZ_copy.py'
Jan 31 08:00:53 compute-0 sudo[38925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:53 compute-0 python3.9[38927]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846452.3765864-373-118189603131491/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:00:53 compute-0 sudo[38925]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:53 compute-0 sudo[39077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzrrjcgylyyumlmvigyugjnlhuzdbqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846453.547573-391-68080783919230/AnsiballZ_dnf.py'
Jan 31 08:00:53 compute-0 sudo[39077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:00:53 compute-0 python3.9[39079]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:00:56 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 08:00:57 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 08:00:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:00:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:00:57 compute-0 systemd[1]: Reloading.
Jan 31 08:00:57 compute-0 systemd-rc-local-generator[39140]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:00:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:00:58 compute-0 sudo[39077]: pam_unix(sudo:session): session closed for user root
Jan 31 08:00:58 compute-0 python3.9[40631]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:00:59 compute-0 python3.9[41801]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 08:01:00 compute-0 python3.9[42667]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:01:00 compute-0 sudo[43308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqbcwspcmsjmfermzcyummvggkylxvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846460.3627212-430-86320530731780/AnsiballZ_command.py'
Jan 31 08:01:00 compute-0 sudo[43308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:00 compute-0 python3.9[43310]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:01 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 08:01:01 compute-0 systemd[1]: Starting Authorization Manager...
Jan 31 08:01:01 compute-0 CROND[43530]: (root) CMD (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 08:01:01 compute-0 run-parts[43534]: (/etc/cron.hourly) starting 0anacron
Jan 31 08:01:01 compute-0 anacron[43542]: Anacron started on 2026-01-31
Jan 31 08:01:01 compute-0 anacron[43542]: Will run job `cron.daily' in 40 min.
Jan 31 08:01:01 compute-0 anacron[43542]: Will run job `cron.weekly' in 60 min.
Jan 31 08:01:01 compute-0 anacron[43542]: Will run job `cron.monthly' in 80 min.
Jan 31 08:01:01 compute-0 anacron[43542]: Jobs will be executed sequentially
Jan 31 08:01:01 compute-0 run-parts[43544]: (/etc/cron.hourly) finished 0anacron
Jan 31 08:01:01 compute-0 CROND[43529]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 31 08:01:01 compute-0 polkitd[43527]: Started polkitd version 0.117
Jan 31 08:01:01 compute-0 polkitd[43527]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 08:01:01 compute-0 polkitd[43527]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 08:01:01 compute-0 polkitd[43527]: Finished loading, compiling and executing 2 rules
Jan 31 08:01:01 compute-0 systemd[1]: Started Authorization Manager.
Jan 31 08:01:01 compute-0 polkitd[43527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 31 08:01:01 compute-0 sudo[43308]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:01 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:01:01 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:01:01 compute-0 systemd[1]: man-db-cache-update.service: Consumed 3.680s CPU time.
Jan 31 08:01:01 compute-0 systemd[1]: run-rf67aeb44f1144730aad031132dccd811.service: Deactivated successfully.
Jan 31 08:01:02 compute-0 sudo[43711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roiqysnysohuungfbeibtqgqxqdbspgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846461.8178854-439-2580906022315/AnsiballZ_systemd.py'
Jan 31 08:01:02 compute-0 sudo[43711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:02 compute-0 python3.9[43713]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:01:02 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 08:01:02 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 08:01:02 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 08:01:02 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 08:01:02 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 08:01:02 compute-0 sudo[43711]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:03 compute-0 python3.9[43875]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 08:01:05 compute-0 sudo[44025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuysykpvyyptdtuepqxfisuqccybubak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846464.9961383-496-154476132596386/AnsiballZ_systemd.py'
Jan 31 08:01:05 compute-0 sudo[44025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:05 compute-0 python3.9[44027]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:01:05 compute-0 systemd[1]: Reloading.
Jan 31 08:01:05 compute-0 systemd-rc-local-generator[44052]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:01:05 compute-0 sudo[44025]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:06 compute-0 sudo[44214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nigdubmngnrwyvrubruqebgqmafryapb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846466.0276186-496-79335981316858/AnsiballZ_systemd.py'
Jan 31 08:01:06 compute-0 sudo[44214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:06 compute-0 python3.9[44216]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:01:06 compute-0 systemd[1]: Reloading.
Jan 31 08:01:06 compute-0 systemd-rc-local-generator[44240]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:01:07 compute-0 sudo[44214]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:07 compute-0 sudo[44402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmrlwgiznsjvxrwtbbsqznhvuasfiqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846467.2708735-512-154683771681381/AnsiballZ_command.py'
Jan 31 08:01:07 compute-0 sudo[44402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:07 compute-0 python3.9[44404]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:07 compute-0 sudo[44402]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:08 compute-0 sudo[44555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccyyxidpzkhrvusnuhinbazyskxrfhde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846468.0168796-520-188991381946333/AnsiballZ_command.py'
Jan 31 08:01:08 compute-0 sudo[44555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:08 compute-0 python3.9[44557]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:08 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 08:01:08 compute-0 sudo[44555]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:08 compute-0 sudo[44708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwavpobhyhmamkafjqtqdqacgyttnxlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846468.6924798-528-152525035124811/AnsiballZ_command.py'
Jan 31 08:01:08 compute-0 sudo[44708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:09 compute-0 python3.9[44710]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:10 compute-0 sudo[44708]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:11 compute-0 sudo[44870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqedvpkhhpfezqajdbarvowcehtykawd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846470.9303715-536-162638156877103/AnsiballZ_command.py'
Jan 31 08:01:11 compute-0 sudo[44870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:11 compute-0 python3.9[44872]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:11 compute-0 sudo[44870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:11 compute-0 sudo[45023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmfisdnpiqaflmhvktqasjutkoemncwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846471.5917957-544-236943267417892/AnsiballZ_systemd.py'
Jan 31 08:01:11 compute-0 sudo[45023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:12 compute-0 python3.9[45025]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:01:12 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 08:01:12 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 08:01:12 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 08:01:12 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 31 08:01:12 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 08:01:12 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 31 08:01:12 compute-0 sudo[45023]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:12 compute-0 sshd-session[31326]: Connection closed by 192.168.122.30 port 51954
Jan 31 08:01:12 compute-0 sshd-session[31323]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:01:12 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 08:01:12 compute-0 systemd[1]: session-9.scope: Consumed 2min 2.094s CPU time.
Jan 31 08:01:12 compute-0 systemd-logind[793]: Session 9 logged out. Waiting for processes to exit.
Jan 31 08:01:12 compute-0 systemd-logind[793]: Removed session 9.
Jan 31 08:01:17 compute-0 sshd-session[45055]: Accepted publickey for zuul from 192.168.122.30 port 34616 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:01:17 compute-0 systemd-logind[793]: New session 10 of user zuul.
Jan 31 08:01:17 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 31 08:01:17 compute-0 sshd-session[45055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:01:18 compute-0 python3.9[45208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:19 compute-0 sudo[45362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufjkgchubunigfbhpzexczwteroeamla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846479.1178174-31-4783875330936/AnsiballZ_getent.py'
Jan 31 08:01:19 compute-0 sudo[45362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:19 compute-0 python3.9[45364]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 08:01:19 compute-0 sudo[45362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:20 compute-0 sudo[45515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsonxqssqsoabuasxkccruozisprnaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846479.876993-39-12241601381547/AnsiballZ_group.py'
Jan 31 08:01:20 compute-0 sudo[45515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:20 compute-0 python3.9[45517]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:01:20 compute-0 groupadd[45518]: group added to /etc/group: name=openvswitch, GID=42476
Jan 31 08:01:21 compute-0 groupadd[45518]: group added to /etc/gshadow: name=openvswitch
Jan 31 08:01:21 compute-0 groupadd[45518]: new group: name=openvswitch, GID=42476
Jan 31 08:01:21 compute-0 sudo[45515]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:21 compute-0 sudo[45673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flrdhhwanqrjizidezkwslbidvdnmzyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846481.1915503-47-45044446184332/AnsiballZ_user.py'
Jan 31 08:01:21 compute-0 sudo[45673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:21 compute-0 python3.9[45675]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 08:01:21 compute-0 useradd[45677]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 08:01:21 compute-0 useradd[45677]: add 'openvswitch' to group 'hugetlbfs'
Jan 31 08:01:21 compute-0 useradd[45677]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 31 08:01:21 compute-0 sudo[45673]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:22 compute-0 sudo[45833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kubugbwmpdiirwagaxdgdounfanotffr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846482.1670322-57-181855551821912/AnsiballZ_setup.py'
Jan 31 08:01:22 compute-0 sudo[45833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:22 compute-0 python3.9[45835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:01:22 compute-0 sudo[45833]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:23 compute-0 sudo[45917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pptuzyhuaifbaxlcuouhhnjmjfrmrgux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846482.1670322-57-181855551821912/AnsiballZ_dnf.py'
Jan 31 08:01:23 compute-0 sudo[45917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:23 compute-0 python3.9[45919]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 08:01:28 compute-0 sudo[45917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:28 compute-0 sudo[46080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfabalngducuocdelgoovegdcpqddmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846488.2573788-71-141949927989662/AnsiballZ_dnf.py'
Jan 31 08:01:28 compute-0 sudo[46080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:28 compute-0 python3.9[46082]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:01:42 compute-0 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:01:42 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:01:42 compute-0 groupadd[46105]: group added to /etc/group: name=unbound, GID=994
Jan 31 08:01:42 compute-0 groupadd[46105]: group added to /etc/gshadow: name=unbound
Jan 31 08:01:42 compute-0 groupadd[46105]: new group: name=unbound, GID=994
Jan 31 08:01:43 compute-0 useradd[46112]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 31 08:01:43 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 08:01:43 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 08:01:44 compute-0 sshd-session[46130]: Invalid user solv from 92.118.39.76 port 33002
Jan 31 08:01:44 compute-0 sshd-session[46130]: Connection closed by invalid user solv 92.118.39.76 port 33002 [preauth]
Jan 31 08:01:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:01:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:01:45 compute-0 systemd[1]: Reloading.
Jan 31 08:01:45 compute-0 systemd-sysv-generator[46610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:01:45 compute-0 systemd-rc-local-generator[46607]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:01:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:01:47 compute-0 sudo[46080]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:48 compute-0 sudo[47181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdioohkckwputwxnuadpnvmigdbugcop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846508.1258028-79-240552856858265/AnsiballZ_systemd.py'
Jan 31 08:01:48 compute-0 sudo[47181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:49 compute-0 python3.9[47183]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:01:49 compute-0 systemd[1]: Reloading.
Jan 31 08:01:49 compute-0 systemd-rc-local-generator[47210]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:01:49 compute-0 systemd-sysv-generator[47213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:01:49 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 08:01:49 compute-0 chown[47224]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 08:01:49 compute-0 ovs-ctl[47229]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 08:01:49 compute-0 ovs-ctl[47229]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 08:01:49 compute-0 ovs-ctl[47229]: Starting ovsdb-server [  OK  ]
Jan 31 08:01:49 compute-0 ovs-vsctl[47278]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 08:01:49 compute-0 ovs-vsctl[47298]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c8bc61c4-1b90-42d4-9c52-3d83532ede66\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 08:01:49 compute-0 ovs-ctl[47229]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 08:01:49 compute-0 ovs-ctl[47229]: Enabling remote OVSDB managers [  OK  ]
Jan 31 08:01:49 compute-0 ovs-vsctl[47304]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 08:01:49 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 08:01:49 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 08:01:49 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 08:01:49 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 08:01:49 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 08:01:49 compute-0 ovs-ctl[47349]: Inserting openvswitch module [  OK  ]
Jan 31 08:01:50 compute-0 ovs-ctl[47317]: Starting ovs-vswitchd [  OK  ]
Jan 31 08:01:50 compute-0 ovs-ctl[47317]: Enabling remote OVSDB managers [  OK  ]
Jan 31 08:01:50 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 08:01:50 compute-0 ovs-vsctl[47366]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 08:01:50 compute-0 systemd[1]: Starting Open vSwitch...
Jan 31 08:01:50 compute-0 systemd[1]: Finished Open vSwitch.
Jan 31 08:01:50 compute-0 sudo[47181]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:50 compute-0 python3.9[47518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:01:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:01:50 compute-0 systemd[1]: run-rec31d1dc21a242df9f010955ddf265db.service: Deactivated successfully.
Jan 31 08:01:51 compute-0 sudo[47669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teckqqezavzddveyzaughsojwrjlqbwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846511.139833-97-215253409389349/AnsiballZ_sefcontext.py'
Jan 31 08:01:51 compute-0 sudo[47669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:51 compute-0 python3.9[47671]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 08:01:54 compute-0 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:01:54 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:01:54 compute-0 sudo[47669]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:55 compute-0 python3.9[47826]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:01:56 compute-0 sudo[47982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftkyvfbljndqqdbfohpcrkorpileucza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846516.028703-115-38409914219704/AnsiballZ_dnf.py'
Jan 31 08:01:56 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 08:01:56 compute-0 sudo[47982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:56 compute-0 python3.9[47984]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:01:57 compute-0 sudo[47982]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:58 compute-0 sudo[48135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvqmflawklxsmtgfxpbntqkcutpnsafs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846518.0473166-123-22661142905698/AnsiballZ_command.py'
Jan 31 08:01:58 compute-0 sudo[48135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:58 compute-0 python3.9[48137]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:01:59 compute-0 sudo[48135]: pam_unix(sudo:session): session closed for user root
Jan 31 08:01:59 compute-0 sudo[48422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiyyokiembxpelkfknxlcdfxynryldac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846519.3492916-131-184331888774465/AnsiballZ_file.py'
Jan 31 08:01:59 compute-0 sudo[48422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:01:59 compute-0 python3.9[48424]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 08:01:59 compute-0 sudo[48422]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:00 compute-0 python3.9[48574]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:02:01 compute-0 sudo[48726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdnqqriihvpxgolruicjonwhbqhnzlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846520.9722776-147-39121250974655/AnsiballZ_dnf.py'
Jan 31 08:02:01 compute-0 sudo[48726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:01 compute-0 python3.9[48728]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:02:03 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:02:03 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:02:03 compute-0 systemd[1]: Reloading.
Jan 31 08:02:03 compute-0 systemd-rc-local-generator[48765]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:02:03 compute-0 systemd-sysv-generator[48768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:02:03 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:02:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:02:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:02:03 compute-0 systemd[1]: run-r9af21980559144b7b34fe72aa952b5f0.service: Deactivated successfully.
Jan 31 08:02:03 compute-0 sudo[48726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:04 compute-0 sudo[49043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdedpnqkyqtvgfpskoodedxfuzwkhjaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846523.8172102-155-273687077881454/AnsiballZ_systemd.py'
Jan 31 08:02:04 compute-0 sudo[49043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:04 compute-0 python3.9[49045]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:02:04 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 08:02:04 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 08:02:04 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 08:02:04 compute-0 systemd[1]: Stopping Network Manager...
Jan 31 08:02:04 compute-0 NetworkManager[7191]: <info>  [1769846524.3630] caught SIGTERM, shutting down normally.
Jan 31 08:02:04 compute-0 NetworkManager[7191]: <info>  [1769846524.3643] dhcp4 (eth0): canceled DHCP transaction
Jan 31 08:02:04 compute-0 NetworkManager[7191]: <info>  [1769846524.3643] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:02:04 compute-0 NetworkManager[7191]: <info>  [1769846524.3643] dhcp4 (eth0): state changed no lease
Jan 31 08:02:04 compute-0 NetworkManager[7191]: <info>  [1769846524.3645] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 08:02:04 compute-0 NetworkManager[7191]: <info>  [1769846524.3696] exiting (success)
Jan 31 08:02:04 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 08:02:04 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 08:02:04 compute-0 systemd[1]: Stopped Network Manager.
Jan 31 08:02:04 compute-0 systemd[1]: NetworkManager.service: Consumed 14.651s CPU time, 4.1M memory peak, read 0B from disk, written 35.0K to disk.
Jan 31 08:02:04 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 08:02:04 compute-0 systemd[1]: Starting Network Manager...
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4189] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:46d0e983-b0c8-47a0-b578-409408b2d808)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4190] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4233] manager[0x55c1ea6eb000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 08:02:04 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 08:02:04 compute-0 systemd[1]: Started Hostname Service.
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4914] hostname: hostname: using hostnamed
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4916] hostname: static hostname changed from (none) to "compute-0"
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4922] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4927] manager[0x55c1ea6eb000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4931] manager[0x55c1ea6eb000]: rfkill: WWAN hardware radio set enabled
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4949] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4960] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4961] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4962] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4963] manager: Networking is enabled by state file
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4965] settings: Loaded settings plugin: keyfile (internal)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4970] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.4996] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5005] dhcp: init: Using DHCP client 'internal'
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5008] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5014] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5021] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5030] device (lo): Activation: starting connection 'lo' (4e410dfc-e55f-4386-a962-128f9b1580ba)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5035] device (eth0): carrier: link connected
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5040] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5048] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5049] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5056] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5066] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5071] device (eth1): carrier: link connected
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5075] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5082] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631) (indicated)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5083] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5091] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5101] device (eth1): Activation: starting connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 08:02:04 compute-0 systemd[1]: Started Network Manager.
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5107] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5120] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5124] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5127] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5130] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5137] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5140] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5145] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5148] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5156] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5160] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5168] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5184] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5193] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5198] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5204] device (lo): Activation: successful, device activated.
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5214] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5218] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5223] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 08:02:04 compute-0 NetworkManager[49054]: <info>  [1769846524.5228] device (eth1): Activation: successful, device activated.
Jan 31 08:02:04 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 31 08:02:04 compute-0 sudo[49043]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:04 compute-0 sudo[49250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqlltzmcojabekmujmepaiyusxnjvbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846524.6956642-163-190156364589985/AnsiballZ_dnf.py'
Jan 31 08:02:04 compute-0 sudo[49250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:05 compute-0 python3.9[49252]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.4000] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.4009] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.5952] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.5986] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.5988] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.5991] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.5993] device (eth0): Activation: successful, device activated.
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.5998] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 08:02:06 compute-0 NetworkManager[49054]: <info>  [1769846526.6001] manager: startup complete
Jan 31 08:02:06 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 31 08:02:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:02:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:02:10 compute-0 systemd[1]: Reloading.
Jan 31 08:02:11 compute-0 systemd-sysv-generator[49323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:02:11 compute-0 systemd-rc-local-generator[49317]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:02:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:02:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:02:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:02:12 compute-0 systemd[1]: run-rdce7cd79d0ce4e5c9df465071fbc7bee.service: Deactivated successfully.
Jan 31 08:02:12 compute-0 sudo[49250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:12 compute-0 sudo[49730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttwiblifydxhkjnidjvtnxqijnslwzfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846532.5316455-175-33913433837207/AnsiballZ_stat.py'
Jan 31 08:02:12 compute-0 sudo[49730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:12 compute-0 python3.9[49732]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:02:13 compute-0 sudo[49730]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:13 compute-0 sudo[49882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeyefnhnkulthkwjbkyzslmkpzhkloqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846533.1756244-184-144444145729989/AnsiballZ_ini_file.py'
Jan 31 08:02:13 compute-0 sudo[49882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:13 compute-0 python3.9[49884]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:13 compute-0 sudo[49882]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:14 compute-0 sudo[50036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tptueuuhjyjojsqcykdmdvpwuhtddplo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846533.972121-194-51431591194693/AnsiballZ_ini_file.py'
Jan 31 08:02:14 compute-0 sudo[50036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:14 compute-0 python3.9[50038]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:14 compute-0 sudo[50036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:14 compute-0 sudo[50188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owbvddqquuudtzhtlhehwffiihgwovhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846534.5631075-194-116960299266777/AnsiballZ_ini_file.py'
Jan 31 08:02:14 compute-0 sudo[50188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:14 compute-0 python3.9[50190]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:15 compute-0 sudo[50188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:15 compute-0 sudo[50340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvxkmjbsqwuygbggkzxnqcqdlafbkldj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846535.1833725-209-134418046215482/AnsiballZ_ini_file.py'
Jan 31 08:02:15 compute-0 sudo[50340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:15 compute-0 python3.9[50342]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:15 compute-0 sudo[50340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:15 compute-0 sudo[50492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbiyfiuseonvsvtnqznavfainqiuqexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846535.7500396-209-52511458784060/AnsiballZ_ini_file.py'
Jan 31 08:02:15 compute-0 sudo[50492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:16 compute-0 python3.9[50494]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:16 compute-0 sudo[50492]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:16 compute-0 sudo[50644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfhqurywjtjctgghtyimlfxgunylouhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846536.300346-224-248937413928306/AnsiballZ_stat.py'
Jan 31 08:02:16 compute-0 sudo[50644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:16 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 08:02:16 compute-0 python3.9[50646]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:16 compute-0 sudo[50644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:17 compute-0 sudo[50767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uujrgozikxxiqvmvpyrktedwfmvkwxnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846536.300346-224-248937413928306/AnsiballZ_copy.py'
Jan 31 08:02:17 compute-0 sudo[50767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:17 compute-0 python3.9[50769]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846536.300346-224-248937413928306/.source _original_basename=.5ygojt6x follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:17 compute-0 sudo[50767]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:17 compute-0 sudo[50919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqvkzoflauxwsqonfjkxhzsmoxhwpbvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846537.5892196-239-233200097152007/AnsiballZ_file.py'
Jan 31 08:02:17 compute-0 sudo[50919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:17 compute-0 python3.9[50921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:18 compute-0 sudo[50919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:18 compute-0 sudo[51071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gspzfdcbgjgjffyjpojubpmhhosrheth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846538.1491501-247-32039083474518/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 31 08:02:18 compute-0 sudo[51071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:18 compute-0 python3.9[51073]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 08:02:18 compute-0 sudo[51071]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:19 compute-0 sudo[51223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oktxjbhvjuhglnruvfflsqgvrklgktky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846538.9661517-256-169736147797047/AnsiballZ_file.py'
Jan 31 08:02:19 compute-0 sudo[51223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:19 compute-0 python3.9[51225]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:19 compute-0 sudo[51223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:19 compute-0 sudo[51375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxpmztduiffybpuarjfeucarsqzaixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846539.675266-266-122445457971851/AnsiballZ_stat.py'
Jan 31 08:02:19 compute-0 sudo[51375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:20 compute-0 sudo[51375]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:20 compute-0 sudo[51498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avvulqdokwcpozriaibapusjpdjcshpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846539.675266-266-122445457971851/AnsiballZ_copy.py'
Jan 31 08:02:20 compute-0 sudo[51498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:20 compute-0 sudo[51498]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:21 compute-0 sudo[51650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umlwdgimriihojcmkpipvfsddnlxjioi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846540.875133-281-106909750696792/AnsiballZ_slurp.py'
Jan 31 08:02:21 compute-0 sudo[51650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:21 compute-0 python3.9[51652]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 08:02:21 compute-0 sudo[51650]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:22 compute-0 sudo[51825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vggcjwczwiylxtdbnbcdsltpsyoswrzy ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846541.7012212-290-27978434884099/async_wrapper.py j484590135125 300 /home/zuul/.ansible/tmp/ansible-tmp-1769846541.7012212-290-27978434884099/AnsiballZ_edpm_os_net_config.py _'
Jan 31 08:02:22 compute-0 sudo[51825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:22 compute-0 ansible-async_wrapper.py[51827]: Invoked with j484590135125 300 /home/zuul/.ansible/tmp/ansible-tmp-1769846541.7012212-290-27978434884099/AnsiballZ_edpm_os_net_config.py _
Jan 31 08:02:22 compute-0 ansible-async_wrapper.py[51830]: Starting module and watcher
Jan 31 08:02:22 compute-0 ansible-async_wrapper.py[51830]: Start watching 51831 (300)
Jan 31 08:02:22 compute-0 ansible-async_wrapper.py[51831]: Start module (51831)
Jan 31 08:02:22 compute-0 ansible-async_wrapper.py[51827]: Return async_wrapper task started.
Jan 31 08:02:22 compute-0 sudo[51825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:22 compute-0 python3.9[51832]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 08:02:23 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 08:02:23 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 08:02:23 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 08:02:23 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 08:02:23 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2027] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2048] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2614] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2616] audit: op="connection-add" uuid="9ee11f4b-31f1-43a5-8f98-0087a73970cd" name="br-ex-br" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2628] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2629] audit: op="connection-add" uuid="6f4f3e81-680d-4928-b50c-a7b92a461894" name="br-ex-port" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2640] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2641] audit: op="connection-add" uuid="d5a2bc86-9453-4ca9-bca0-3397e82c83de" name="eth1-port" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2653] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2654] audit: op="connection-add" uuid="aaa52c88-497c-414c-915e-b58dc0f180a8" name="vlan20-port" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2665] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2666] audit: op="connection-add" uuid="d506542e-c2b4-431f-8734-c2c067b68112" name="vlan21-port" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2677] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2679] audit: op="connection-add" uuid="62acf6a5-d2ed-4d60-a333-951bcdf17f4b" name="vlan22-port" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2688] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2690] audit: op="connection-add" uuid="26e21aa9-d0c6-4793-8106-8ace15032fc8" name="vlan23-port" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2709] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2725] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2727] audit: op="connection-add" uuid="ce25c6e1-6866-4c39-aed6-0b91b3458a87" name="br-ex-if" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2769] audit: op="connection-update" uuid="b1c2768f-0cc5-558f-b3d7-45fa9d4a2631" name="ci-private-network" args="ovs-external-ids.data,connection.master,connection.controller,connection.port-type,connection.timestamp,connection.slave-type,ovs-interface.type,ipv6.routes,ipv6.addresses,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.addr-gen-mode,ipv4.addresses,ipv4.routing-rules,ipv4.never-default,ipv4.dns,ipv4.method,ipv4.routes" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2783] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2785] audit: op="connection-add" uuid="2701d975-ede4-4c82-a55d-27ab5cda392a" name="vlan20-if" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2799] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2801] audit: op="connection-add" uuid="ee581e17-f1bb-4e5e-89c7-5ea2dffcb049" name="vlan21-if" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2815] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2817] audit: op="connection-add" uuid="5c088239-6ddc-4503-b787-c018fb7084c7" name="vlan22-if" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2831] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2833] audit: op="connection-add" uuid="b4f0b405-f89a-494e-ab03-ec55ecb4f1de" name="vlan23-if" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2843] audit: op="connection-delete" uuid="ab34d4aa-4908-314b-843b-ee48e300858c" name="Wired connection 1" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2854] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2856] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2863] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2867] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9ee11f4b-31f1-43a5-8f98-0087a73970cd)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2868] audit: op="connection-activate" uuid="9ee11f4b-31f1-43a5-8f98-0087a73970cd" name="br-ex-br" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2870] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2872] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2877] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2881] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (6f4f3e81-680d-4928-b50c-a7b92a461894)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2883] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2885] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2889] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2893] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d5a2bc86-9453-4ca9-bca0-3397e82c83de)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2895] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2897] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2902] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2906] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (aaa52c88-497c-414c-915e-b58dc0f180a8)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2908] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2910] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2915] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2919] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (d506542e-c2b4-431f-8734-c2c067b68112)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2921] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2922] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2927] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2932] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (62acf6a5-d2ed-4d60-a333-951bcdf17f4b)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2934] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2935] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2940] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2945] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (26e21aa9-d0c6-4793-8106-8ace15032fc8)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2946] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2949] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2951] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2957] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.2959] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2962] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2966] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (ce25c6e1-6866-4c39-aed6-0b91b3458a87)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2967] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2971] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2974] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2976] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2978] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2987] device (eth1): disconnecting for new activation request.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2988] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2991] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2994] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2996] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.2999] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.3000] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3004] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3008] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (2701d975-ede4-4c82-a55d-27ab5cda392a)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3009] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3012] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3015] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3016] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3019] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.3021] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3024] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3028] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (ee581e17-f1bb-4e5e-89c7-5ea2dffcb049)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3030] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3033] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3035] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3037] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3040] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.3041] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3045] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3049] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (5c088239-6ddc-4503-b787-c018fb7084c7)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3050] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3055] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3057] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3059] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3062] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <warn>  [1769846544.3063] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3067] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3071] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b4f0b405-f89a-494e-ab03-ec55ecb4f1de)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3072] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3076] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3078] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3080] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3081] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3093] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3095] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3098] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3101] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3107] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3111] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3115] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3119] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3121] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3134] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3138] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3142] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 systemd-udevd[51838]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3144] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3151] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3156] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 08:02:24 compute-0 kernel: Timeout policy base is empty
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3160] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3162] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3166] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3169] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3172] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3174] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3178] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3181] dhcp4 (eth0): canceled DHCP transaction
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3181] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3181] dhcp4 (eth0): state changed no lease
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3183] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3190] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3192] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51833 uid=0 result="fail" reason="Device is not activated"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3267] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3271] dhcp4 (eth0): state changed new lease, address=38.102.83.23
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3277] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 08:02:24 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3328] device (eth1): disconnecting for new activation request.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3329] audit: op="connection-activate" uuid="b1c2768f-0cc5-558f-b3d7-45fa9d4a2631" name="ci-private-network" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3330] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3339] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 08:02:24 compute-0 kernel: br-ex: entered promiscuous mode
Jan 31 08:02:24 compute-0 kernel: vlan22: entered promiscuous mode
Jan 31 08:02:24 compute-0 systemd-udevd[51839]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3538] device (eth1): Activation: starting connection 'ci-private-network' (b1c2768f-0cc5-558f-b3d7-45fa9d4a2631)
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3553] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3554] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3559] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3560] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3561] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3563] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3564] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3569] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 08:02:24 compute-0 kernel: vlan21: entered promiscuous mode
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3591] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3595] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3600] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3603] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3607] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3611] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3614] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3617] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3620] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3623] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3634] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3637] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3639] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3641] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 kernel: vlan20: entered promiscuous mode
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3645] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3648] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3656] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3661] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3668] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3680] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3717] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3725] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3727] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3737] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3743] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3744] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3759] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 kernel: vlan23: entered promiscuous mode
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3782] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3826] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3828] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3833] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3838] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3840] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3842] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3846] device (eth1): Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3850] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3854] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3859] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3860] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3877] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3880] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3885] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3888] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3895] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3907] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3946] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3947] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 08:02:24 compute-0 NetworkManager[49054]: <info>  [1769846544.3950] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 08:02:25 compute-0 NetworkManager[49054]: <info>  [1769846545.5143] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 08:02:25 compute-0 NetworkManager[49054]: <info>  [1769846545.6684] checkpoint[0x55c1ea6c1950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 08:02:25 compute-0 NetworkManager[49054]: <info>  [1769846545.6686] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51833 uid=0 result="success"
Jan 31 08:02:25 compute-0 NetworkManager[49054]: <info>  [1769846545.9763] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 08:02:25 compute-0 NetworkManager[49054]: <info>  [1769846545.9779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 08:02:26 compute-0 sudo[52190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmdakwuesnkuhjjrjdvghbbzpipeyjky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846545.6558535-290-226077967027530/AnsiballZ_async_status.py'
Jan 31 08:02:26 compute-0 sudo[52190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:26 compute-0 NetworkManager[49054]: <info>  [1769846546.1851] audit: op="networking-control" arg="global-dns-configuration" pid=51833 uid=0 result="success"
Jan 31 08:02:26 compute-0 NetworkManager[49054]: <info>  [1769846546.1881] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 08:02:26 compute-0 NetworkManager[49054]: <info>  [1769846546.1909] audit: op="networking-control" arg="global-dns-configuration" pid=51833 uid=0 result="success"
Jan 31 08:02:26 compute-0 NetworkManager[49054]: <info>  [1769846546.1936] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 08:02:26 compute-0 python3.9[52192]: ansible-ansible.legacy.async_status Invoked with jid=j484590135125.51827 mode=status _async_dir=/root/.ansible_async
Jan 31 08:02:26 compute-0 sudo[52190]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:26 compute-0 NetworkManager[49054]: <info>  [1769846546.3590] checkpoint[0x55c1ea6c1a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 08:02:26 compute-0 NetworkManager[49054]: <info>  [1769846546.3596] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51833 uid=0 result="success"
Jan 31 08:02:26 compute-0 ansible-async_wrapper.py[51831]: Module complete (51831)
Jan 31 08:02:27 compute-0 ansible-async_wrapper.py[51830]: Done in kid B.
Jan 31 08:02:29 compute-0 sudo[52294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egaphvnefodeidqsrdzuoivdgmnexlvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846545.6558535-290-226077967027530/AnsiballZ_async_status.py'
Jan 31 08:02:29 compute-0 sudo[52294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:29 compute-0 python3.9[52296]: ansible-ansible.legacy.async_status Invoked with jid=j484590135125.51827 mode=status _async_dir=/root/.ansible_async
Jan 31 08:02:29 compute-0 sudo[52294]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:30 compute-0 sudo[52394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjfbgkdlbpjrsquxqpbvstulsydzrimn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846545.6558535-290-226077967027530/AnsiballZ_async_status.py'
Jan 31 08:02:30 compute-0 sudo[52394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:30 compute-0 python3.9[52396]: ansible-ansible.legacy.async_status Invoked with jid=j484590135125.51827 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 08:02:30 compute-0 sudo[52394]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:30 compute-0 sudo[52546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyeoohzeeesllustmbyimvjcycszeoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846550.404236-317-25361877950196/AnsiballZ_stat.py'
Jan 31 08:02:30 compute-0 sudo[52546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:30 compute-0 python3.9[52548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:30 compute-0 sudo[52546]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:31 compute-0 sudo[52669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywogpljqinxmuqxziokmmafsjahxuskv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846550.404236-317-25361877950196/AnsiballZ_copy.py'
Jan 31 08:02:31 compute-0 sudo[52669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:31 compute-0 python3.9[52671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846550.404236-317-25361877950196/.source.returncode _original_basename=.n0ksc5pc follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:31 compute-0 sudo[52669]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:31 compute-0 sudo[52821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vczhjjukqotiboueabcuzorvpablnzbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846551.5032341-333-173049882578869/AnsiballZ_stat.py'
Jan 31 08:02:31 compute-0 sudo[52821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:31 compute-0 python3.9[52823]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:31 compute-0 sudo[52821]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:32 compute-0 sudo[52944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jthyintqipbsxcjirkfmdqkfnhtdnnxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846551.5032341-333-173049882578869/AnsiballZ_copy.py'
Jan 31 08:02:32 compute-0 sudo[52944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:32 compute-0 python3.9[52946]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846551.5032341-333-173049882578869/.source.cfg _original_basename=.bpvcphoh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:32 compute-0 sudo[52944]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:33 compute-0 sudo[53097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufgifatvkxfjxqxldlcadnkhdrpewoqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846552.7920496-348-100594524086466/AnsiballZ_systemd.py'
Jan 31 08:02:33 compute-0 sudo[53097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:33 compute-0 python3.9[53099]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:02:33 compute-0 systemd[1]: Reloading Network Manager...
Jan 31 08:02:33 compute-0 NetworkManager[49054]: <info>  [1769846553.4826] audit: op="reload" arg="0" pid=53103 uid=0 result="success"
Jan 31 08:02:33 compute-0 NetworkManager[49054]: <info>  [1769846553.4839] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 08:02:33 compute-0 systemd[1]: Reloaded Network Manager.
Jan 31 08:02:33 compute-0 sudo[53097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:33 compute-0 sshd-session[45058]: Connection closed by 192.168.122.30 port 34616
Jan 31 08:02:33 compute-0 sshd-session[45055]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:02:33 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 08:02:33 compute-0 systemd[1]: session-10.scope: Consumed 44.019s CPU time.
Jan 31 08:02:33 compute-0 systemd-logind[793]: Session 10 logged out. Waiting for processes to exit.
Jan 31 08:02:33 compute-0 systemd-logind[793]: Removed session 10.
Jan 31 08:02:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 08:02:39 compute-0 sshd-session[53136]: Accepted publickey for zuul from 192.168.122.30 port 53768 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:02:39 compute-0 systemd-logind[793]: New session 11 of user zuul.
Jan 31 08:02:39 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 31 08:02:39 compute-0 sshd-session[53136]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:02:40 compute-0 python3.9[53289]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:02:41 compute-0 python3.9[53443]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:02:42 compute-0 python3.9[53637]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:02:43 compute-0 sshd-session[53139]: Connection closed by 192.168.122.30 port 53768
Jan 31 08:02:43 compute-0 sshd-session[53136]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:02:43 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 08:02:43 compute-0 systemd[1]: session-11.scope: Consumed 2.134s CPU time.
Jan 31 08:02:43 compute-0 systemd-logind[793]: Session 11 logged out. Waiting for processes to exit.
Jan 31 08:02:43 compute-0 systemd-logind[793]: Removed session 11.
Jan 31 08:02:43 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 08:02:49 compute-0 sshd-session[53667]: Accepted publickey for zuul from 192.168.122.30 port 39758 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:02:49 compute-0 systemd-logind[793]: New session 12 of user zuul.
Jan 31 08:02:49 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 31 08:02:49 compute-0 sshd-session[53667]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:02:50 compute-0 python3.9[53820]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:02:51 compute-0 python3.9[53974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:02:51 compute-0 sudo[54128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbratvpwzfwmntklksozymaktkiwuig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846571.4853873-35-24509015941923/AnsiballZ_setup.py'
Jan 31 08:02:51 compute-0 sudo[54128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:52 compute-0 python3.9[54130]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:02:52 compute-0 sudo[54128]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:52 compute-0 sudo[54213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klogpfpbxsdcsrijysfbzvtzgvgqbkdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846571.4853873-35-24509015941923/AnsiballZ_dnf.py'
Jan 31 08:02:52 compute-0 sudo[54213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:52 compute-0 python3.9[54215]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:02:54 compute-0 sudo[54213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:54 compute-0 sudo[54366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggzyrihskymmpdugckutgqxxmdwonwej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846574.2035306-47-153034049042015/AnsiballZ_setup.py'
Jan 31 08:02:54 compute-0 sudo[54366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:54 compute-0 python3.9[54368]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:02:54 compute-0 sudo[54366]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:55 compute-0 sudo[54562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjjcxdckunenvrfvrhclqvnnhfliaim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846575.129155-58-240195022243576/AnsiballZ_file.py'
Jan 31 08:02:55 compute-0 sudo[54562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:55 compute-0 python3.9[54564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:55 compute-0 sudo[54562]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:56 compute-0 sudo[54714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwcjfokxgrowogzylncwgwdwxfszwdwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846575.8134856-66-33437718963293/AnsiballZ_command.py'
Jan 31 08:02:56 compute-0 sudo[54714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:56 compute-0 python3.9[54716]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat440514473-merged.mount: Deactivated successfully.
Jan 31 08:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck4089264200-merged.mount: Deactivated successfully.
Jan 31 08:02:56 compute-0 podman[54717]: 2026-01-31 08:02:56.445443642 +0000 UTC m=+0.050421303 system refresh
Jan 31 08:02:56 compute-0 sudo[54714]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:56 compute-0 sudo[54876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzmeztlfrxsseygvpclklrbbozyeicti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846576.6018455-74-167306454914483/AnsiballZ_stat.py'
Jan 31 08:02:56 compute-0 sudo[54876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:57 compute-0 python3.9[54878]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:57 compute-0 sudo[54876]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:02:57 compute-0 sudo[54999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzcudxqdxpytvwjxknopiazdcqzmukkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846576.6018455-74-167306454914483/AnsiballZ_copy.py'
Jan 31 08:02:57 compute-0 sudo[54999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:57 compute-0 python3.9[55001]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846576.6018455-74-167306454914483/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a0d0de64980f92bd74a85cb943f41651d4ddd4a2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:02:57 compute-0 sudo[54999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:58 compute-0 sudo[55151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmdhydlsbrrfkfdjhvkeohprggilpamd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846578.0274599-89-30705589071548/AnsiballZ_stat.py'
Jan 31 08:02:58 compute-0 sudo[55151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:58 compute-0 python3.9[55153]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:02:58 compute-0 sudo[55151]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:58 compute-0 sudo[55274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfiliodupbmhmsnltlstibhtbbdoiqky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846578.0274599-89-30705589071548/AnsiballZ_copy.py'
Jan 31 08:02:58 compute-0 sudo[55274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:58 compute-0 python3.9[55276]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846578.0274599-89-30705589071548/.source.conf follow=False _original_basename=registries.conf.j2 checksum=fb9ecd0f69b71ff4fcaafa5405e2d3d2be108c65 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:02:59 compute-0 sudo[55274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:02:59 compute-0 sudo[55426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qihcltqiyasoghbkfvfqocznrzczqgug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846579.1846092-105-184388228872479/AnsiballZ_ini_file.py'
Jan 31 08:02:59 compute-0 sudo[55426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:02:59 compute-0 python3.9[55428]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:02:59 compute-0 sudo[55426]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:00 compute-0 sudo[55578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vivaxxacwgvndigdjvwaafijmucwawgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846579.882486-105-110121975111608/AnsiballZ_ini_file.py'
Jan 31 08:03:00 compute-0 sudo[55578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:00 compute-0 python3.9[55580]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:00 compute-0 sudo[55578]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:00 compute-0 sudo[55730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjxwzzbzpwvnwysyjdfhofaxssgvwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846580.4046302-105-251225374291328/AnsiballZ_ini_file.py'
Jan 31 08:03:00 compute-0 sudo[55730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:00 compute-0 python3.9[55732]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:00 compute-0 sudo[55730]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:01 compute-0 sudo[55882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwceekjpbnlvqgkoxmqiifjemtcgyqce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846580.999452-105-107535822274459/AnsiballZ_ini_file.py'
Jan 31 08:03:01 compute-0 sudo[55882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:01 compute-0 python3.9[55884]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:01 compute-0 sudo[55882]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:01 compute-0 sudo[56034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pioynpkcsuzkyjzfbmavhfpqalfpfdcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846581.6287007-136-10433278776293/AnsiballZ_dnf.py'
Jan 31 08:03:01 compute-0 sudo[56034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:02 compute-0 python3.9[56036]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:03 compute-0 sudo[56034]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:04 compute-0 sudo[56187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brtrucqnpahqvmpqjuknxybtrxxmricl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846583.981862-147-183004696530396/AnsiballZ_setup.py'
Jan 31 08:03:04 compute-0 sudo[56187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:04 compute-0 python3.9[56189]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:04 compute-0 sudo[56187]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:04 compute-0 sudo[56341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqfxavcifwnrkynczwbwutidvkciqdfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846584.7111797-155-220870972560263/AnsiballZ_stat.py'
Jan 31 08:03:04 compute-0 sudo[56341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:05 compute-0 python3.9[56343]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:03:05 compute-0 sudo[56341]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:05 compute-0 sudo[56493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjlbmgjzltadpgfbujrznldpdlthnvxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846585.3393874-164-180302485252945/AnsiballZ_stat.py'
Jan 31 08:03:05 compute-0 sudo[56493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:05 compute-0 python3.9[56495]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:03:05 compute-0 sudo[56493]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:06 compute-0 sudo[56645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugfguqgradqsgtbuqwulsfrtglneryew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846586.0503304-174-193880991203635/AnsiballZ_command.py'
Jan 31 08:03:06 compute-0 sudo[56645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:06 compute-0 python3.9[56647]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:03:06 compute-0 sudo[56645]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:07 compute-0 sudo[56798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czuscpuibggripasdpqpjctcibhrxqou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846586.7281244-184-224669766559775/AnsiballZ_service_facts.py'
Jan 31 08:03:07 compute-0 sudo[56798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:07 compute-0 python3.9[56800]: ansible-service_facts Invoked
Jan 31 08:03:07 compute-0 network[56817]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:03:07 compute-0 network[56818]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:03:07 compute-0 network[56819]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:03:09 compute-0 sudo[56798]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:10 compute-0 sudo[57102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkfjrkxyntidrqswbezpgeihbihgpabm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769846590.1077375-199-272882639395429/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769846590.1077375-199-272882639395429/args'
Jan 31 08:03:10 compute-0 sudo[57102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:10 compute-0 sudo[57102]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:11 compute-0 sudo[57269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpfeocybcbntqozcwzulpzaesrztyulp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846590.7679017-210-182977320822970/AnsiballZ_dnf.py'
Jan 31 08:03:11 compute-0 sudo[57269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:11 compute-0 python3.9[57271]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:03:12 compute-0 sudo[57269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:13 compute-0 sudo[57422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmrvdcuxdintfnsvrlnuhyjpeerkebh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846592.7264218-223-87944267555885/AnsiballZ_package_facts.py'
Jan 31 08:03:13 compute-0 sudo[57422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:13 compute-0 python3.9[57424]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 08:03:13 compute-0 sudo[57422]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:14 compute-0 sudo[57574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfppcushbhyothamionqxrhyhjuagzhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846594.2738013-233-73186446001738/AnsiballZ_stat.py'
Jan 31 08:03:14 compute-0 sudo[57574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:14 compute-0 python3.9[57576]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:14 compute-0 sudo[57574]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:15 compute-0 sudo[57699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hosfhwmgywufolwqzvlskpnrvoqozrzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846594.2738013-233-73186446001738/AnsiballZ_copy.py'
Jan 31 08:03:15 compute-0 sudo[57699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:15 compute-0 python3.9[57701]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846594.2738013-233-73186446001738/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:15 compute-0 sudo[57699]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:15 compute-0 sudo[57853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjqgumchlodterzyhiadoccigyxlfdwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846595.6311169-248-51889708224660/AnsiballZ_stat.py'
Jan 31 08:03:15 compute-0 sudo[57853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:16 compute-0 python3.9[57855]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:16 compute-0 sudo[57853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:16 compute-0 sudo[57978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvypfdyunevraubeklohvplelfoxxyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846595.6311169-248-51889708224660/AnsiballZ_copy.py'
Jan 31 08:03:16 compute-0 sudo[57978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:16 compute-0 python3.9[57980]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846595.6311169-248-51889708224660/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:16 compute-0 sudo[57978]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:17 compute-0 sudo[58132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrsnjdjebybyspjpbndyjdvmwydosqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846597.1939962-269-200405590978992/AnsiballZ_lineinfile.py'
Jan 31 08:03:17 compute-0 sudo[58132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:17 compute-0 python3.9[58134]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:17 compute-0 sudo[58132]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:18 compute-0 sudo[58286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxnjtychbgmtjweoxqhlenbsfubfgixn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846598.2901258-284-136823011806260/AnsiballZ_setup.py'
Jan 31 08:03:18 compute-0 sudo[58286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:18 compute-0 python3.9[58288]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:03:19 compute-0 sudo[58286]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:19 compute-0 sudo[58370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxxwveqlljhotowtpwrkgjtjuqcuewdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846598.2901258-284-136823011806260/AnsiballZ_systemd.py'
Jan 31 08:03:19 compute-0 sudo[58370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:20 compute-0 python3.9[58372]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:20 compute-0 sudo[58370]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:20 compute-0 sudo[58524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdwjacffkznlqwvjqqxgtfycfujcjslo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846600.6090574-300-154358651696380/AnsiballZ_setup.py'
Jan 31 08:03:20 compute-0 sudo[58524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:21 compute-0 python3.9[58526]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:03:21 compute-0 sudo[58524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:21 compute-0 sudo[58608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzechnqhzqbqifviyjzapwlkkncamtxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846600.6090574-300-154358651696380/AnsiballZ_systemd.py'
Jan 31 08:03:21 compute-0 sudo[58608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:21 compute-0 python3.9[58610]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:03:21 compute-0 chronyd[792]: chronyd exiting
Jan 31 08:03:21 compute-0 systemd[1]: Stopping NTP client/server...
Jan 31 08:03:21 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 08:03:21 compute-0 systemd[1]: Stopped NTP client/server.
Jan 31 08:03:21 compute-0 systemd[1]: Starting NTP client/server...
Jan 31 08:03:22 compute-0 chronyd[58619]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 08:03:22 compute-0 chronyd[58619]: Frequency -28.532 +/- 0.471 ppm read from /var/lib/chrony/drift
Jan 31 08:03:22 compute-0 chronyd[58619]: Loaded seccomp filter (level 2)
Jan 31 08:03:22 compute-0 systemd[1]: Started NTP client/server.
Jan 31 08:03:22 compute-0 sudo[58608]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:22 compute-0 sshd-session[53670]: Connection closed by 192.168.122.30 port 39758
Jan 31 08:03:22 compute-0 sshd-session[53667]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:03:22 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 08:03:22 compute-0 systemd[1]: session-12.scope: Consumed 23.188s CPU time.
Jan 31 08:03:22 compute-0 systemd-logind[793]: Session 12 logged out. Waiting for processes to exit.
Jan 31 08:03:22 compute-0 systemd-logind[793]: Removed session 12.
Jan 31 08:03:29 compute-0 sshd-session[58645]: Accepted publickey for zuul from 192.168.122.30 port 42334 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:03:29 compute-0 systemd-logind[793]: New session 13 of user zuul.
Jan 31 08:03:29 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 31 08:03:29 compute-0 sshd-session[58645]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:03:29 compute-0 sudo[58798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqnxifanxzkuyxugrhzxdhsjofsszdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846609.424102-17-266892242761867/AnsiballZ_file.py'
Jan 31 08:03:29 compute-0 sudo[58798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:30 compute-0 python3.9[58800]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:30 compute-0 sudo[58798]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:30 compute-0 sudo[58950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcaazjcvhxewydqzvwsgaymjhbzsbjga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846610.3053641-29-162700165575738/AnsiballZ_stat.py'
Jan 31 08:03:30 compute-0 sudo[58950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:30 compute-0 python3.9[58952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:30 compute-0 sudo[58950]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:31 compute-0 sudo[59073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdguwdtfdkzqdkzkamsswwkeprzfgfvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846610.3053641-29-162700165575738/AnsiballZ_copy.py'
Jan 31 08:03:31 compute-0 sudo[59073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:31 compute-0 python3.9[59075]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846610.3053641-29-162700165575738/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:31 compute-0 sudo[59073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:31 compute-0 sshd-session[58648]: Connection closed by 192.168.122.30 port 42334
Jan 31 08:03:31 compute-0 sshd-session[58645]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:03:31 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 08:03:31 compute-0 systemd[1]: session-13.scope: Consumed 1.525s CPU time.
Jan 31 08:03:31 compute-0 systemd-logind[793]: Session 13 logged out. Waiting for processes to exit.
Jan 31 08:03:31 compute-0 systemd-logind[793]: Removed session 13.
Jan 31 08:03:37 compute-0 sshd-session[59100]: Accepted publickey for zuul from 192.168.122.30 port 52302 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:03:37 compute-0 systemd-logind[793]: New session 14 of user zuul.
Jan 31 08:03:37 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 31 08:03:37 compute-0 sshd-session[59100]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:03:38 compute-0 python3.9[59253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:03:39 compute-0 sudo[59407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklwoybntvtjyfzoakafnksqwrghuhxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846618.6254792-28-98210784991746/AnsiballZ_file.py'
Jan 31 08:03:39 compute-0 sudo[59407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:39 compute-0 python3.9[59409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:39 compute-0 sudo[59407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:39 compute-0 sudo[59582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byjrmwriyrusfawmnmtefwxvwopeqyuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846619.431571-36-212876184170098/AnsiballZ_stat.py'
Jan 31 08:03:39 compute-0 sudo[59582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:40 compute-0 python3.9[59584]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:40 compute-0 sudo[59582]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:40 compute-0 sudo[59705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpecuxlubtueowqmsximbzaeelzpubjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846619.431571-36-212876184170098/AnsiballZ_copy.py'
Jan 31 08:03:40 compute-0 sudo[59705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:40 compute-0 python3.9[59707]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769846619.431571-36-212876184170098/.source.json _original_basename=.wgu1uijb follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:40 compute-0 sudo[59705]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:41 compute-0 sudo[59857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axxispwkhgekmztgdzyweccuvhnsbemw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846621.3824642-59-43092221843208/AnsiballZ_stat.py'
Jan 31 08:03:41 compute-0 sudo[59857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:41 compute-0 python3.9[59859]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:41 compute-0 sudo[59857]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:42 compute-0 sudo[59980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikquyenotjxgfqrreapqvjmwxajxjevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846621.3824642-59-43092221843208/AnsiballZ_copy.py'
Jan 31 08:03:42 compute-0 sudo[59980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:42 compute-0 python3.9[59982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846621.3824642-59-43092221843208/.source _original_basename=.wpy1veem follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:42 compute-0 sudo[59980]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:43 compute-0 sudo[60132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjhsmrirrxhljavqkxuszytkhlszamsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846622.745428-75-215200082826913/AnsiballZ_file.py'
Jan 31 08:03:43 compute-0 sudo[60132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:43 compute-0 python3.9[60134]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:43 compute-0 sudo[60132]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:43 compute-0 sudo[60284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjresxkfqicwddzjczmdpcbojghhldgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846623.4673817-83-75942054987205/AnsiballZ_stat.py'
Jan 31 08:03:43 compute-0 sudo[60284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:43 compute-0 python3.9[60286]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:43 compute-0 sudo[60284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:44 compute-0 sudo[60407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvapwjopfnxypmudtpbbmxanvssoxseb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846623.4673817-83-75942054987205/AnsiballZ_copy.py'
Jan 31 08:03:44 compute-0 sudo[60407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:44 compute-0 python3.9[60409]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846623.4673817-83-75942054987205/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:44 compute-0 sudo[60407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:44 compute-0 sudo[60559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxrswufymabxtvucqpozjapwvmadznyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846624.6211345-83-207611462919307/AnsiballZ_stat.py'
Jan 31 08:03:44 compute-0 sudo[60559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:45 compute-0 python3.9[60561]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:45 compute-0 sudo[60559]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:45 compute-0 sudo[60682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmlhehlgaskdlzebiinsenabufhkppvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846624.6211345-83-207611462919307/AnsiballZ_copy.py'
Jan 31 08:03:45 compute-0 sudo[60682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:45 compute-0 python3.9[60684]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769846624.6211345-83-207611462919307/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:03:45 compute-0 sudo[60682]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:46 compute-0 sudo[60834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bstedwettmzknlaciogyuzbubyhyyoqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846625.7493703-112-6929869816526/AnsiballZ_file.py'
Jan 31 08:03:46 compute-0 sudo[60834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:46 compute-0 python3.9[60836]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:46 compute-0 sudo[60834]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:46 compute-0 sudo[60986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xizizsaeroltbxtxzkfhzluvntoekjnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846626.3524013-120-182153191120037/AnsiballZ_stat.py'
Jan 31 08:03:46 compute-0 sudo[60986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:46 compute-0 python3.9[60988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:46 compute-0 sudo[60986]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:47 compute-0 sudo[61109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtyxrxqarlddejmqcehnivvsgbriwyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846626.3524013-120-182153191120037/AnsiballZ_copy.py'
Jan 31 08:03:47 compute-0 sudo[61109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:47 compute-0 python3.9[61111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846626.3524013-120-182153191120037/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:47 compute-0 sudo[61109]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:47 compute-0 sudo[61261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalfsddsgvwusmrdfcguyytfvpxvlpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846627.5183442-135-113327242731834/AnsiballZ_stat.py'
Jan 31 08:03:47 compute-0 sudo[61261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:47 compute-0 python3.9[61263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:47 compute-0 sudo[61261]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:48 compute-0 sudo[61384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajnrqqhypxpstehagknkczbvwbtxbbla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846627.5183442-135-113327242731834/AnsiballZ_copy.py'
Jan 31 08:03:48 compute-0 sudo[61384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:48 compute-0 python3.9[61386]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846627.5183442-135-113327242731834/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:48 compute-0 sudo[61384]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:49 compute-0 sudo[61536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shnmgxwotraiomsjsfvztzkaqqmjdbrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846628.4749422-150-119026630335912/AnsiballZ_systemd.py'
Jan 31 08:03:49 compute-0 sudo[61536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:49 compute-0 python3.9[61538]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:49 compute-0 systemd[1]: Reloading.
Jan 31 08:03:49 compute-0 systemd-sysv-generator[61566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:03:49 compute-0 systemd-rc-local-generator[61559]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:03:49 compute-0 systemd[1]: Reloading.
Jan 31 08:03:49 compute-0 systemd-rc-local-generator[61598]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:03:49 compute-0 systemd-sysv-generator[61601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:03:49 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 08:03:49 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 08:03:49 compute-0 sudo[61536]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:50 compute-0 sudo[61762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbrprcueborkvcyzcajzbjlvkvibqlna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846630.008669-158-229850904310803/AnsiballZ_stat.py'
Jan 31 08:03:50 compute-0 sudo[61762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:50 compute-0 python3.9[61764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:50 compute-0 sudo[61762]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:50 compute-0 sudo[61885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skztnvoqfvdspwggjsdsvarowuczhbzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846630.008669-158-229850904310803/AnsiballZ_copy.py'
Jan 31 08:03:50 compute-0 sudo[61885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:50 compute-0 python3.9[61887]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846630.008669-158-229850904310803/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:50 compute-0 sudo[61885]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:51 compute-0 sudo[62037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msfmrcunhpzognekcnycupuzylfqsdwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846631.1145935-173-139390245400861/AnsiballZ_stat.py'
Jan 31 08:03:51 compute-0 sudo[62037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:51 compute-0 python3.9[62039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:03:51 compute-0 sudo[62037]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:51 compute-0 sudo[62160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljjuziuvsllkorwepoprhvkkpkibhie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846631.1145935-173-139390245400861/AnsiballZ_copy.py'
Jan 31 08:03:51 compute-0 sudo[62160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:52 compute-0 python3.9[62162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846631.1145935-173-139390245400861/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:03:52 compute-0 sudo[62160]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:52 compute-0 sudo[62312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnffhgynddgiaeefdhsxhulqdszizatf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846632.191907-188-200068663490837/AnsiballZ_systemd.py'
Jan 31 08:03:52 compute-0 sudo[62312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:52 compute-0 python3.9[62314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:52 compute-0 systemd[1]: Reloading.
Jan 31 08:03:52 compute-0 systemd-rc-local-generator[62340]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:03:52 compute-0 systemd-sysv-generator[62343]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:03:52 compute-0 systemd[1]: Reloading.
Jan 31 08:03:52 compute-0 systemd-rc-local-generator[62380]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:03:52 compute-0 systemd-sysv-generator[62383]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:03:53 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:03:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:03:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:03:53 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:03:53 compute-0 sudo[62312]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:53 compute-0 python3.9[62541]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:03:53 compute-0 network[62558]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:03:53 compute-0 network[62559]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:03:53 compute-0 network[62560]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:03:56 compute-0 sudo[62820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvthiexqfhnmfletfskavxvboxjbumnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846636.3192399-204-199525871930653/AnsiballZ_systemd.py'
Jan 31 08:03:56 compute-0 sudo[62820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:56 compute-0 python3.9[62822]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:56 compute-0 systemd[1]: Reloading.
Jan 31 08:03:56 compute-0 systemd-rc-local-generator[62845]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:03:56 compute-0 systemd-sysv-generator[62852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:03:57 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 08:03:57 compute-0 iptables.init[62861]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 08:03:57 compute-0 iptables.init[62861]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 08:03:57 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 08:03:57 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 08:03:57 compute-0 sudo[62820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:57 compute-0 sudo[63055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vchewgucgcfqzzdhbhdjgckxqdvfkxph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846637.6084664-204-5329287178932/AnsiballZ_systemd.py'
Jan 31 08:03:57 compute-0 sudo[63055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:58 compute-0 python3.9[63057]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:58 compute-0 sudo[63055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:03:58 compute-0 sudo[63209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qejobvuwfwgvxwlkmmadypgvktkgprmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846638.4615312-220-104566435729837/AnsiballZ_systemd.py'
Jan 31 08:03:58 compute-0 sudo[63209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:03:59 compute-0 python3.9[63211]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:03:59 compute-0 systemd[1]: Reloading.
Jan 31 08:03:59 compute-0 systemd-sysv-generator[63240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:03:59 compute-0 systemd-rc-local-generator[63236]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:03:59 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 31 08:03:59 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 31 08:03:59 compute-0 sudo[63209]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:00 compute-0 sudo[63401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhfkepflkhnfhlpxgiqcwqakgbbbzwjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846639.8848403-228-137000003152136/AnsiballZ_command.py'
Jan 31 08:04:00 compute-0 sudo[63401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:00 compute-0 python3.9[63403]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:00 compute-0 sudo[63401]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-0 sudo[63554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxyhapvbdbneuedxxilqixvgmsosurhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846640.8543825-242-237684181591787/AnsiballZ_stat.py'
Jan 31 08:04:01 compute-0 sudo[63554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:01 compute-0 python3.9[63556]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:01 compute-0 sudo[63554]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:01 compute-0 sudo[63679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duffmitcqfaqyvrnzapkyfkvzqryaila ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846640.8543825-242-237684181591787/AnsiballZ_copy.py'
Jan 31 08:04:01 compute-0 sudo[63679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:01 compute-0 python3.9[63681]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846640.8543825-242-237684181591787/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:01 compute-0 sudo[63679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:02 compute-0 sudo[63832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buinqzowzmmndhgmrhbuzhgsbrgsldcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846641.9820585-257-96416223731766/AnsiballZ_systemd.py'
Jan 31 08:04:02 compute-0 sudo[63832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:02 compute-0 python3.9[63834]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:04:02 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 08:04:02 compute-0 sshd[1008]: Received SIGHUP; restarting.
Jan 31 08:04:02 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 08:04:02 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 31 08:04:02 compute-0 sshd[1008]: Server listening on :: port 22.
Jan 31 08:04:02 compute-0 sudo[63832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:03 compute-0 sudo[63988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvryexvlmdzfrmtclooffpmauqpymuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846642.7946491-265-187712560161288/AnsiballZ_file.py'
Jan 31 08:04:03 compute-0 sudo[63988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:03 compute-0 python3.9[63990]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:03 compute-0 sudo[63988]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:03 compute-0 sudo[64140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spxnmehhqwgmhfswxljmdukgtfaceumn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846643.3404284-273-165664477151750/AnsiballZ_stat.py'
Jan 31 08:04:03 compute-0 sudo[64140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:03 compute-0 python3.9[64142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:03 compute-0 sudo[64140]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:04 compute-0 sudo[64263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vctayiexompkqsktyjjruyvrdycrysra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846643.3404284-273-165664477151750/AnsiballZ_copy.py'
Jan 31 08:04:04 compute-0 sudo[64263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:04 compute-0 python3.9[64265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846643.3404284-273-165664477151750/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:04 compute-0 sudo[64263]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:04 compute-0 sudo[64415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjjkxgebythbwvasfstrhffkgevbhjof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846644.6125824-291-253506681883870/AnsiballZ_timezone.py'
Jan 31 08:04:04 compute-0 sudo[64415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:05 compute-0 python3.9[64417]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 08:04:05 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 08:04:05 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 08:04:05 compute-0 sudo[64415]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:05 compute-0 sudo[64571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndtenzqwaaktiahcnrkuneyysldyaveb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846645.5239377-300-50673103634570/AnsiballZ_file.py'
Jan 31 08:04:05 compute-0 sudo[64571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:05 compute-0 python3.9[64573]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:05 compute-0 sudo[64571]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:06 compute-0 sudo[64723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ariijyqpdztnczwovdsuttthqtaaputc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846646.0832753-308-32194648216786/AnsiballZ_stat.py'
Jan 31 08:04:06 compute-0 sudo[64723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:06 compute-0 python3.9[64725]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:06 compute-0 sudo[64723]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:06 compute-0 sudo[64846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llafrpparjmzngeiyijvzifzkkmmtshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846646.0832753-308-32194648216786/AnsiballZ_copy.py'
Jan 31 08:04:06 compute-0 sudo[64846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:07 compute-0 python3.9[64848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846646.0832753-308-32194648216786/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:07 compute-0 sudo[64846]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:07 compute-0 sudo[64998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cahqbuayaifdhabwpmurbzhklqjtmyes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846647.2067745-323-93897081966963/AnsiballZ_stat.py'
Jan 31 08:04:07 compute-0 sudo[64998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:07 compute-0 python3.9[65000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:07 compute-0 sudo[64998]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:07 compute-0 sudo[65121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpytqdqexerypjxzjhzxityxhwkrunz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846647.2067745-323-93897081966963/AnsiballZ_copy.py'
Jan 31 08:04:07 compute-0 sudo[65121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:08 compute-0 python3.9[65123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769846647.2067745-323-93897081966963/.source.yaml _original_basename=.rfkq3vex follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:08 compute-0 sudo[65121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:08 compute-0 sudo[65273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpqqjlpaxuudigbjtwtrienbikkageg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846648.2833638-338-973658554962/AnsiballZ_stat.py'
Jan 31 08:04:08 compute-0 sudo[65273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:08 compute-0 python3.9[65275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:08 compute-0 sudo[65273]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:09 compute-0 sudo[65396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylyhxqoncysvzyhucmdefttowolznjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846648.2833638-338-973658554962/AnsiballZ_copy.py'
Jan 31 08:04:09 compute-0 sudo[65396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:09 compute-0 python3.9[65398]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846648.2833638-338-973658554962/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:09 compute-0 sudo[65396]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:09 compute-0 sudo[65548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgjysiyekxlajtewzewvwriknopdncqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846649.3922992-353-267309097634265/AnsiballZ_command.py'
Jan 31 08:04:09 compute-0 sudo[65548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:09 compute-0 python3.9[65550]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:09 compute-0 sudo[65548]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:10 compute-0 sudo[65701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsmdvfnfrntdngcaxdhwlllorcfoejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846650.0552337-361-112888638022276/AnsiballZ_command.py'
Jan 31 08:04:10 compute-0 sudo[65701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:10 compute-0 python3.9[65703]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:10 compute-0 sudo[65701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:11 compute-0 sudo[65854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrhafhrdedpxpsxajtfrlugwuxcyrnme ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846650.7536547-369-158688300967301/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:04:11 compute-0 sudo[65854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:11 compute-0 python3[65856]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:04:11 compute-0 sudo[65854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:11 compute-0 sudo[66006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxlsqsqkufbymtoaubfyzephvvseqfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846651.5271509-377-39972684576228/AnsiballZ_stat.py'
Jan 31 08:04:11 compute-0 sudo[66006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:11 compute-0 python3.9[66008]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:11 compute-0 sudo[66006]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:12 compute-0 sudo[66129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycczrbxfocwbhhhsczdvnooqhftkwass ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846651.5271509-377-39972684576228/AnsiballZ_copy.py'
Jan 31 08:04:12 compute-0 sudo[66129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:12 compute-0 python3.9[66131]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846651.5271509-377-39972684576228/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:12 compute-0 sudo[66129]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:12 compute-0 sudo[66281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfkawnfgjqzjqdkwymkaxdsiugzjtkyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846652.5353765-392-19829098134456/AnsiballZ_stat.py'
Jan 31 08:04:12 compute-0 sudo[66281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:12 compute-0 python3.9[66283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:12 compute-0 sudo[66281]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:13 compute-0 sudo[66404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiljmvaytjovbkzlhdxaprpqqlysufji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846652.5353765-392-19829098134456/AnsiballZ_copy.py'
Jan 31 08:04:13 compute-0 sudo[66404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:13 compute-0 python3.9[66406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846652.5353765-392-19829098134456/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:13 compute-0 sudo[66404]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:13 compute-0 sudo[66556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbaynvgapxafbvyddjgxhmxqcgjcaupp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846653.5433662-407-225303096826217/AnsiballZ_stat.py'
Jan 31 08:04:13 compute-0 sudo[66556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:13 compute-0 python3.9[66558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:13 compute-0 sudo[66556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 sudo[66679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkbjveriosknlrjdzqrdvyfbzwcogqnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846653.5433662-407-225303096826217/AnsiballZ_copy.py'
Jan 31 08:04:14 compute-0 sudo[66679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:14 compute-0 python3.9[66681]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846653.5433662-407-225303096826217/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:14 compute-0 sudo[66679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:14 compute-0 sudo[66831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvegwuodxqqbkmpuexdmfkxmfbvzrrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846654.5986328-422-78700857142203/AnsiballZ_stat.py'
Jan 31 08:04:14 compute-0 sudo[66831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:14 compute-0 python3.9[66833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:14 compute-0 sudo[66831]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:15 compute-0 sudo[66954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwzwpozjweopsgwhbwrzyqhytkvmuqxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846654.5986328-422-78700857142203/AnsiballZ_copy.py'
Jan 31 08:04:15 compute-0 sudo[66954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:15 compute-0 python3.9[66956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846654.5986328-422-78700857142203/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:15 compute-0 sudo[66954]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:15 compute-0 sudo[67106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxzxndagbuvfcmgehwkcjxmbodsposfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846655.578394-437-89056132597835/AnsiballZ_stat.py'
Jan 31 08:04:15 compute-0 sudo[67106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:16 compute-0 python3.9[67108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:04:16 compute-0 sudo[67106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:16 compute-0 sudo[67229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytbkkjkpwytumbrrcmwpjawlogyabhnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846655.578394-437-89056132597835/AnsiballZ_copy.py'
Jan 31 08:04:16 compute-0 sudo[67229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:16 compute-0 python3.9[67231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769846655.578394-437-89056132597835/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:16 compute-0 sudo[67229]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:16 compute-0 sudo[67381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfjbygellrgdzmdoqylhpeelgnbusplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846656.6872566-452-67397539497787/AnsiballZ_file.py'
Jan 31 08:04:16 compute-0 sudo[67381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:17 compute-0 python3.9[67383]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:17 compute-0 sudo[67381]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:17 compute-0 sudo[67533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knreiljfqanlrvcgdsmnwxhktumsjztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846657.2615035-460-88314458739074/AnsiballZ_command.py'
Jan 31 08:04:17 compute-0 sudo[67533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:17 compute-0 python3.9[67535]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:17 compute-0 sudo[67533]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:18 compute-0 sudo[67692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhkxklufnxmfoxbbjaemnpjpzcekjfek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846657.843449-468-10553655320496/AnsiballZ_blockinfile.py'
Jan 31 08:04:18 compute-0 sudo[67692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:18 compute-0 python3.9[67694]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:18 compute-0 sudo[67692]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:18 compute-0 sudo[67845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvykghlnglzauwirnbbcqgohpuinzhjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846658.633212-477-74088116672891/AnsiballZ_file.py'
Jan 31 08:04:18 compute-0 sudo[67845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:19 compute-0 python3.9[67847]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:19 compute-0 sudo[67845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:19 compute-0 sudo[67997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzhomrvjicntccwgbcvfnhujahsvuad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846659.202904-477-274386128940616/AnsiballZ_file.py'
Jan 31 08:04:19 compute-0 sudo[67997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:19 compute-0 python3.9[67999]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:19 compute-0 sudo[67997]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:20 compute-0 sudo[68149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohjeerfdqnpjztqpqsoqrugxhnrzeukz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846659.8206851-492-81572114333070/AnsiballZ_mount.py'
Jan 31 08:04:20 compute-0 sudo[68149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:20 compute-0 python3.9[68151]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 08:04:20 compute-0 sudo[68149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:20 compute-0 sudo[68302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzgrqcrmfbekxecglcmnmuiwfruxbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846660.6455286-492-176907339347888/AnsiballZ_mount.py'
Jan 31 08:04:20 compute-0 sudo[68302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:21 compute-0 python3.9[68304]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 08:04:21 compute-0 sudo[68302]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:21 compute-0 sshd-session[59103]: Connection closed by 192.168.122.30 port 52302
Jan 31 08:04:21 compute-0 sshd-session[59100]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:04:21 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 08:04:21 compute-0 systemd[1]: session-14.scope: Consumed 31.446s CPU time.
Jan 31 08:04:21 compute-0 systemd-logind[793]: Session 14 logged out. Waiting for processes to exit.
Jan 31 08:04:21 compute-0 systemd-logind[793]: Removed session 14.
Jan 31 08:04:26 compute-0 sshd-session[68330]: Accepted publickey for zuul from 192.168.122.30 port 35280 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:04:26 compute-0 systemd-logind[793]: New session 15 of user zuul.
Jan 31 08:04:26 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 31 08:04:26 compute-0 sshd-session[68330]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:04:27 compute-0 sudo[68483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bznzzorumzohiarhshihqboeyttlyfsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846666.7619169-16-43035387014737/AnsiballZ_tempfile.py'
Jan 31 08:04:27 compute-0 sudo[68483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:27 compute-0 python3.9[68485]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 08:04:27 compute-0 sudo[68483]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:27 compute-0 sudo[68635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyobrjneujtsfwtltldcstibwrqohsdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846667.5456834-28-190985504034489/AnsiballZ_stat.py'
Jan 31 08:04:27 compute-0 sudo[68635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:28 compute-0 python3.9[68637]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:04:28 compute-0 sudo[68635]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:28 compute-0 sudo[68787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmiettitvkigawikykjurbzgexpwizzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846668.31987-38-26710973844294/AnsiballZ_setup.py'
Jan 31 08:04:28 compute-0 sudo[68787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:29 compute-0 python3.9[68789]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:04:29 compute-0 sudo[68787]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:29 compute-0 sudo[68939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzwzcxlofymiudihovypvegjuhmdynam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846669.3306336-47-68592254347345/AnsiballZ_blockinfile.py'
Jan 31 08:04:29 compute-0 sudo[68939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:29 compute-0 python3.9[68941]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWE2JVgZg7/u8eKJOhyXjs2p2Qt39hyygdPIhluejh1YW6dcdEylP4WBj6s+q3E0jylhkLknf3rSZ3V/k+1w4fdSUak8G4nLiV+h7jI0m37zoSEXpQABHGJkpgi2eMs0YNEF9ZbgIO31d28SspBpNxFqovrMK9sOzJD3jRaR2TV2FGV4csI4Je0LNdEV2NmeRljWtF7PlqQKs424iGvqmWC0B3yHCfBTNvXWNKzGR1N9odg9DQrU9iQl+1eRKkj6BTvJgzpUrsqny5n8vohkDGBUxN/PXOEp7pqhuJUPSphsqmLwQwrLfwDu7A7dJJfZkVKkpzZyD6doTBm0NvOOS1P7M8/iclLU1KEYLp51WWXc+cX67skjn1vfDJa7CGV5YlXA3q5QP5xqR6eDbptMG7KpRBt6sSG7A44KIXdmzbWGFuBJYi0sjVIDfXPkfJOcwxwUzMotpbCYCDOV94CS6XESh8ZKogwpuB8qVCTqZEJz/qxAkpdL1xxLZ6iM3SA2k=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBdV4ImCUSap74vh7n2NTRmfyoKbp4X6QTOOZaAU/4X4
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNKN9rH1fl1KXYyt+swOzNYmow6bIvU77b90jfMS4wXtyUATZdas4vlUZ46SayVV+s+nKQQloJFhgnR/5ots9Yc=
                                             create=True mode=0644 path=/tmp/ansible.gos1m2ug state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:29 compute-0 sudo[68939]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:30 compute-0 sudo[69091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsrpendlubyufdkgvrjrktidogtezvxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846670.0618534-55-156696503972817/AnsiballZ_command.py'
Jan 31 08:04:30 compute-0 sudo[69091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:30 compute-0 python3.9[69093]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gos1m2ug' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:30 compute-0 sudo[69091]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:31 compute-0 sudo[69245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqtjsgmncgbvttthbjbbmrpcgpcsdqru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846670.8574715-63-183702718910526/AnsiballZ_file.py'
Jan 31 08:04:31 compute-0 sudo[69245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:31 compute-0 python3.9[69247]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gos1m2ug state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:31 compute-0 sudo[69245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:31 compute-0 sshd-session[68333]: Connection closed by 192.168.122.30 port 35280
Jan 31 08:04:31 compute-0 sshd-session[68330]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:04:31 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 08:04:31 compute-0 systemd[1]: session-15.scope: Consumed 3.101s CPU time.
Jan 31 08:04:31 compute-0 systemd-logind[793]: Session 15 logged out. Waiting for processes to exit.
Jan 31 08:04:31 compute-0 systemd-logind[793]: Removed session 15.
Jan 31 08:04:35 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 08:04:37 compute-0 sshd-session[69274]: Accepted publickey for zuul from 192.168.122.30 port 52546 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:04:37 compute-0 systemd-logind[793]: New session 16 of user zuul.
Jan 31 08:04:37 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 31 08:04:37 compute-0 sshd-session[69274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:04:38 compute-0 python3.9[69427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:04:39 compute-0 sudo[69581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olikuhjccqxmtiatoaontgdykwnbxdvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846678.9144566-27-133662336116504/AnsiballZ_systemd.py'
Jan 31 08:04:39 compute-0 sudo[69581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:39 compute-0 python3.9[69583]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 08:04:39 compute-0 sudo[69581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:40 compute-0 sudo[69735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvpnoysqbynmdugtpmpnifrtltcmoznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846680.028424-35-204677667319810/AnsiballZ_systemd.py'
Jan 31 08:04:40 compute-0 sudo[69735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:40 compute-0 python3.9[69737]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:04:40 compute-0 sudo[69735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:41 compute-0 sudo[69888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqffjmuoskxfluumglbwukiawfzmfnnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846680.8131561-44-232409432511609/AnsiballZ_command.py'
Jan 31 08:04:41 compute-0 sudo[69888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:41 compute-0 python3.9[69890]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:41 compute-0 sudo[69888]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:42 compute-0 sudo[70041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucqvggcfypkmbrroryuzdhramfdbtgkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846681.5811546-52-110858517473725/AnsiballZ_stat.py'
Jan 31 08:04:42 compute-0 sudo[70041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:42 compute-0 python3.9[70043]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:04:42 compute-0 sudo[70041]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:42 compute-0 sudo[70195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feynmvvwbiimioctaznnyxmteusjkkly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846682.6855783-60-166163203584366/AnsiballZ_command.py'
Jan 31 08:04:42 compute-0 sudo[70195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:43 compute-0 python3.9[70197]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:04:43 compute-0 sudo[70195]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:44 compute-0 sudo[70350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhujgmfyowcbrugqigfsnzffzarpyyri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846683.5989475-68-133089986054811/AnsiballZ_file.py'
Jan 31 08:04:44 compute-0 sudo[70350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:44 compute-0 python3.9[70352]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:04:44 compute-0 sudo[70350]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:44 compute-0 sshd-session[69277]: Connection closed by 192.168.122.30 port 52546
Jan 31 08:04:44 compute-0 sshd-session[69274]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:04:44 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 08:04:44 compute-0 systemd[1]: session-16.scope: Consumed 4.289s CPU time.
Jan 31 08:04:44 compute-0 systemd-logind[793]: Session 16 logged out. Waiting for processes to exit.
Jan 31 08:04:44 compute-0 systemd-logind[793]: Removed session 16.
Jan 31 08:04:56 compute-0 sshd-session[70377]: Accepted publickey for zuul from 192.168.122.30 port 39448 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:04:56 compute-0 systemd-logind[793]: New session 17 of user zuul.
Jan 31 08:04:56 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 31 08:04:56 compute-0 sshd-session[70377]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:04:57 compute-0 python3.9[70530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:04:57 compute-0 sudo[70684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osmvfoshprdfmuqallmswhynwpurthss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846697.5327299-29-115025647649495/AnsiballZ_setup.py'
Jan 31 08:04:57 compute-0 sudo[70684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:58 compute-0 python3.9[70686]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:04:58 compute-0 sudo[70684]: pam_unix(sudo:session): session closed for user root
Jan 31 08:04:58 compute-0 sudo[70768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncrylfyqtdzycelfkquxcfcvvdbxxipl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846697.5327299-29-115025647649495/AnsiballZ_dnf.py'
Jan 31 08:04:58 compute-0 sudo[70768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:04:59 compute-0 python3.9[70770]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 08:05:00 compute-0 sudo[70768]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:00 compute-0 python3.9[70921]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:02 compute-0 python3.9[71072]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:05:02 compute-0 python3.9[71222]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:05:02 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:05:02 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:05:03 compute-0 python3.9[71373]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:05:03 compute-0 sshd-session[70380]: Connection closed by 192.168.122.30 port 39448
Jan 31 08:05:03 compute-0 sshd-session[70377]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:05:03 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 08:05:03 compute-0 systemd[1]: session-17.scope: Consumed 5.574s CPU time.
Jan 31 08:05:03 compute-0 systemd-logind[793]: Session 17 logged out. Waiting for processes to exit.
Jan 31 08:05:03 compute-0 systemd-logind[793]: Removed session 17.
Jan 31 08:05:11 compute-0 sshd-session[71398]: Accepted publickey for zuul from 38.102.83.220 port 57680 ssh2: RSA SHA256:1cKsZJy0b8y0Op+4rpocXv0xojY9kddve1Dq+1Ump7k
Jan 31 08:05:11 compute-0 systemd-logind[793]: New session 18 of user zuul.
Jan 31 08:05:11 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 31 08:05:11 compute-0 sshd-session[71398]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:05:11 compute-0 sudo[71474]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avzwzmpzrqzmxfquouubkrgnhsyxjsjb ; /usr/bin/python3'
Jan 31 08:05:11 compute-0 sudo[71474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:11 compute-0 useradd[71478]: new group: name=ceph-admin, GID=42478
Jan 31 08:05:11 compute-0 useradd[71478]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 31 08:05:11 compute-0 sudo[71474]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:12 compute-0 sudo[71560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgyfrcxysjgzdptsbrgapeizjvsoqwlh ; /usr/bin/python3'
Jan 31 08:05:12 compute-0 sudo[71560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:12 compute-0 sudo[71560]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:12 compute-0 sudo[71633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkvvhoruylyzwtwlxbeftikodvrnslwd ; /usr/bin/python3'
Jan 31 08:05:12 compute-0 sudo[71633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:12 compute-0 sudo[71633]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:13 compute-0 sudo[71683]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jorxdlhbfktmxttmelcplmvnxhrveyki ; /usr/bin/python3'
Jan 31 08:05:13 compute-0 sudo[71683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:13 compute-0 sudo[71683]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:13 compute-0 sudo[71709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhucqrawkxozteqogeuszawdmozhrnev ; /usr/bin/python3'
Jan 31 08:05:13 compute-0 sudo[71709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:13 compute-0 sudo[71709]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:13 compute-0 sudo[71735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yarwfzxytwfuknyrsnodxanpafxiphxs ; /usr/bin/python3'
Jan 31 08:05:13 compute-0 sudo[71735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:13 compute-0 sudo[71735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:14 compute-0 sudo[71761]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmuhygdigmubonaacxokywrtrldefmhl ; /usr/bin/python3'
Jan 31 08:05:14 compute-0 sudo[71761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:14 compute-0 sudo[71761]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:14 compute-0 sudo[71839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqfdgxkfmigeveahklaikmwehvjngvna ; /usr/bin/python3'
Jan 31 08:05:14 compute-0 sudo[71839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:14 compute-0 sudo[71839]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:14 compute-0 sudo[71912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvhdjzjvowfvzhwjzesawcpqbubkxnv ; /usr/bin/python3'
Jan 31 08:05:14 compute-0 sudo[71912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:14 compute-0 sudo[71912]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:15 compute-0 sudo[72014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbujjmgmmajvxvkdjsygdqijyzleqaak ; /usr/bin/python3'
Jan 31 08:05:15 compute-0 sudo[72014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:15 compute-0 sudo[72014]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:15 compute-0 sudo[72087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfqqrkumddrqslhqcaczwhszbszsixwr ; /usr/bin/python3'
Jan 31 08:05:15 compute-0 sudo[72087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:15 compute-0 sudo[72087]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:16 compute-0 sudo[72137]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atjubchhrvoyaoblugwtafpsgsmfteij ; /usr/bin/python3'
Jan 31 08:05:16 compute-0 sudo[72137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:16 compute-0 python3[72139]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:05:17 compute-0 sudo[72137]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:17 compute-0 sudo[72232]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoazreqnfxaiuaywuzmtnfjbtzrtjrqv ; /usr/bin/python3'
Jan 31 08:05:17 compute-0 sudo[72232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:18 compute-0 python3[72234]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 08:05:19 compute-0 sudo[72232]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:19 compute-0 sudo[72259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhnpekgzezkdjlvimjwdwacouizigkae ; /usr/bin/python3'
Jan 31 08:05:19 compute-0 sudo[72259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:19 compute-0 python3[72261]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:19 compute-0 sudo[72259]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:19 compute-0 sudo[72285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgldjzydedyejcmjvrhfhfqoyhxwkwag ; /usr/bin/python3'
Jan 31 08:05:19 compute-0 sudo[72285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:19 compute-0 python3[72287]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:19 compute-0 kernel: loop: module loaded
Jan 31 08:05:19 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 31 08:05:19 compute-0 sudo[72285]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:19 compute-0 sudo[72320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqzjzdeesstyteeijuhvxalaolupvwnp ; /usr/bin/python3'
Jan 31 08:05:19 compute-0 sudo[72320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:20 compute-0 python3[72322]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:20 compute-0 lvm[72325]: PV /dev/loop3 not used.
Jan 31 08:05:20 compute-0 lvm[72327]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:05:20 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 08:05:20 compute-0 lvm[72331]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 08:05:20 compute-0 lvm[72337]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:05:20 compute-0 lvm[72337]: VG ceph_vg0 finished
Jan 31 08:05:20 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 31 08:05:20 compute-0 sudo[72320]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:20 compute-0 sudo[72413]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sncatbtcdummdmfcwcxtpjwtwdebixow ; /usr/bin/python3'
Jan 31 08:05:20 compute-0 sudo[72413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:20 compute-0 python3[72415]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:05:20 compute-0 sudo[72413]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:20 compute-0 sudo[72486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spxpywwoyfbhljnltsltwiwevxfuimey ; /usr/bin/python3'
Jan 31 08:05:20 compute-0 sudo[72486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:21 compute-0 python3[72488]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846720.5403633-36393-214961255597074/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:21 compute-0 sudo[72486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:21 compute-0 sudo[72536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkdvnvuoxgtkqfgfeylcpqvijinrmttd ; /usr/bin/python3'
Jan 31 08:05:21 compute-0 sudo[72536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:21 compute-0 python3[72538]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:05:21 compute-0 systemd[1]: Reloading.
Jan 31 08:05:21 compute-0 systemd-rc-local-generator[72559]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:05:21 compute-0 systemd-sysv-generator[72566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:05:22 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 08:05:22 compute-0 bash[72580]: /dev/loop3: [64513]:4329562 (/var/lib/ceph-osd-0.img)
Jan 31 08:05:22 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 08:05:22 compute-0 lvm[72581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:05:22 compute-0 lvm[72581]: VG ceph_vg0 finished
Jan 31 08:05:22 compute-0 sudo[72536]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:22 compute-0 sudo[72605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymppdbembjqdwfskkgyrxipeugxojoxv ; /usr/bin/python3'
Jan 31 08:05:22 compute-0 sudo[72605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:22 compute-0 python3[72607]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 08:05:23 compute-0 sudo[72605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:23 compute-0 sudo[72632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffxtgdvlewabmyhppsmvhwvzyyynrsra ; /usr/bin/python3'
Jan 31 08:05:23 compute-0 sudo[72632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:23 compute-0 python3[72634]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:23 compute-0 sudo[72632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:24 compute-0 sudo[72658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhimutjktgaghfbkrqimqiigyssantbs ; /usr/bin/python3'
Jan 31 08:05:24 compute-0 sudo[72658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:24 compute-0 python3[72660]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:24 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 31 08:05:24 compute-0 sudo[72658]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:24 compute-0 sudo[72690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edufizsustrbggkxokmgavcjebhxjzpb ; /usr/bin/python3'
Jan 31 08:05:24 compute-0 sudo[72690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:24 compute-0 python3[72692]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:24 compute-0 lvm[72695]: PV /dev/loop4 not used.
Jan 31 08:05:24 compute-0 lvm[72697]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:05:24 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 31 08:05:24 compute-0 lvm[72700]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 31 08:05:24 compute-0 lvm[72707]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:05:24 compute-0 lvm[72707]: VG ceph_vg1 finished
Jan 31 08:05:24 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 31 08:05:24 compute-0 sudo[72690]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 sudo[72783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibtstgnqswnkjbwgxyhezqfwnlhbnxqo ; /usr/bin/python3'
Jan 31 08:05:25 compute-0 sudo[72783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:25 compute-0 python3[72785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:05:25 compute-0 sudo[72783]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 sudo[72856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grziyhujrpmlnqmvesicrlrbfeitwgah ; /usr/bin/python3'
Jan 31 08:05:25 compute-0 sudo[72856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:25 compute-0 python3[72858]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846724.97736-36420-213522309033381/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:25 compute-0 sudo[72856]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:25 compute-0 sudo[72906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miwacbsarngtwkhqhugiwkvndfaswbdx ; /usr/bin/python3'
Jan 31 08:05:25 compute-0 sudo[72906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:26 compute-0 python3[72908]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:05:26 compute-0 systemd[1]: Reloading.
Jan 31 08:05:26 compute-0 systemd-rc-local-generator[72933]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:05:26 compute-0 systemd-sysv-generator[72939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:05:26 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 08:05:26 compute-0 bash[72948]: /dev/loop4: [64513]:4355723 (/var/lib/ceph-osd-1.img)
Jan 31 08:05:26 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 08:05:26 compute-0 lvm[72949]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:05:26 compute-0 lvm[72949]: VG ceph_vg1 finished
Jan 31 08:05:26 compute-0 sudo[72906]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:26 compute-0 sudo[72973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umyfrleivimzvuskemhwyloyzqspfncz ; /usr/bin/python3'
Jan 31 08:05:26 compute-0 sudo[72973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:26 compute-0 python3[72975]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 08:05:28 compute-0 sudo[72973]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:28 compute-0 sudo[73000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxoztkmpkilkuvieegqrdzpimxffldry ; /usr/bin/python3'
Jan 31 08:05:28 compute-0 sudo[73000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:28 compute-0 python3[73002]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:28 compute-0 sudo[73000]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:28 compute-0 sudo[73026]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjjegrlclcscfxbrpqxjyjdlhbbuwjp ; /usr/bin/python3'
Jan 31 08:05:28 compute-0 sudo[73026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:28 compute-0 python3[73028]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:28 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 31 08:05:28 compute-0 sudo[73026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:28 compute-0 sudo[73058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwurypfcyraquggmtvnekxgrdpnzgrtf ; /usr/bin/python3'
Jan 31 08:05:28 compute-0 sudo[73058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:29 compute-0 python3[73060]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:29 compute-0 lvm[73063]: PV /dev/loop5 not used.
Jan 31 08:05:29 compute-0 lvm[73065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:05:29 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 31 08:05:29 compute-0 lvm[73076]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:05:29 compute-0 lvm[73076]: VG ceph_vg2 finished
Jan 31 08:05:29 compute-0 lvm[73073]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 31 08:05:29 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 31 08:05:29 compute-0 sudo[73058]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:29 compute-0 sudo[73152]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jngkjcvxzjrnyhvubgdxbwtcogxsyaim ; /usr/bin/python3'
Jan 31 08:05:29 compute-0 sudo[73152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:29 compute-0 python3[73154]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:05:29 compute-0 sudo[73152]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:29 compute-0 sudo[73225]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjjsqpjlvevfamshulxxrxtsvpgqpdo ; /usr/bin/python3'
Jan 31 08:05:29 compute-0 sudo[73225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:30 compute-0 python3[73227]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846729.4119883-36447-227531220003309/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:30 compute-0 sudo[73225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:30 compute-0 sudo[73275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woimfcwnbhtglkombrlxljebnkysjblw ; /usr/bin/python3'
Jan 31 08:05:30 compute-0 sudo[73275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:30 compute-0 python3[73277]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:05:30 compute-0 systemd[1]: Reloading.
Jan 31 08:05:30 compute-0 systemd-rc-local-generator[73303]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:05:30 compute-0 systemd-sysv-generator[73309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:05:30 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 31 08:05:30 compute-0 bash[73316]: /dev/loop5: [64513]:4355725 (/var/lib/ceph-osd-2.img)
Jan 31 08:05:30 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 31 08:05:30 compute-0 lvm[73317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:05:30 compute-0 lvm[73317]: VG ceph_vg2 finished
Jan 31 08:05:30 compute-0 sudo[73275]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:32 compute-0 chronyd[58619]: Selected source 23.133.168.247 (pool.ntp.org)
Jan 31 08:05:32 compute-0 python3[73341]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:05:34 compute-0 sudo[73432]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbbngwzbmyhjqqjesgokycqryitnpgxp ; /usr/bin/python3'
Jan 31 08:05:34 compute-0 sudo[73432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:34 compute-0 python3[73434]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 08:05:37 compute-0 sudo[73432]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:37 compute-0 sudo[73489]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxupvhznbjhaebifibqddabhbvzdxlkb ; /usr/bin/python3'
Jan 31 08:05:37 compute-0 sudo[73489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:38 compute-0 python3[73491]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 08:05:42 compute-0 groupadd[73501]: group added to /etc/group: name=cephadm, GID=993
Jan 31 08:05:42 compute-0 groupadd[73501]: group added to /etc/gshadow: name=cephadm
Jan 31 08:05:42 compute-0 groupadd[73501]: new group: name=cephadm, GID=993
Jan 31 08:05:43 compute-0 useradd[73508]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 31 08:05:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:05:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:05:44 compute-0 sudo[73489]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:44 compute-0 sudo[73607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmrkiarrgrkmybeomgogbaplycqkgdlw ; /usr/bin/python3'
Jan 31 08:05:44 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:05:44 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:05:44 compute-0 systemd[1]: run-r64c7e20c282348a1a1a85ae7d09131ec.service: Deactivated successfully.
Jan 31 08:05:44 compute-0 sudo[73607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:44 compute-0 python3[73610]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:44 compute-0 sudo[73607]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:44 compute-0 sudo[73636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msrduwxbnbwofoppvdwzsewzwgjiynjb ; /usr/bin/python3'
Jan 31 08:05:44 compute-0 sudo[73636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:45 compute-0 python3[73638]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:05:45 compute-0 sudo[73636]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:45 compute-0 sudo[73676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtldkqmvyjdspfemuxmpfwclwpkopjqf ; /usr/bin/python3'
Jan 31 08:05:45 compute-0 sudo[73676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:45 compute-0 python3[73678]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:45 compute-0 sudo[73676]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:45 compute-0 sudo[73702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqzqyybliupmfkjdfsruaodddtxhkoyt ; /usr/bin/python3'
Jan 31 08:05:46 compute-0 sudo[73702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:46 compute-0 python3[73704]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:46 compute-0 sudo[73702]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:46 compute-0 sudo[73780]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmkphmoztzwlldwqchwfjpgqjshjwgzc ; /usr/bin/python3'
Jan 31 08:05:46 compute-0 sudo[73780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:46 compute-0 python3[73782]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:05:46 compute-0 sudo[73780]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:47 compute-0 sudo[73853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivtbjfliqlagsddxtajdenffxgicbenm ; /usr/bin/python3'
Jan 31 08:05:47 compute-0 sudo[73853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:47 compute-0 python3[73855]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846746.6134808-36595-24726983862530/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:47 compute-0 sudo[73853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:47 compute-0 sudo[73955]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guhpbzmojppmterqbybvijsruxaowglw ; /usr/bin/python3'
Jan 31 08:05:47 compute-0 sudo[73955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:47 compute-0 python3[73957]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:05:47 compute-0 sudo[73955]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:48 compute-0 sudo[74028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuqyvgfcdngclxhsmajflqntzkybgplx ; /usr/bin/python3'
Jan 31 08:05:48 compute-0 sudo[74028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:48 compute-0 python3[74030]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846747.6756558-36613-281222421558314/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:05:48 compute-0 sudo[74028]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:48 compute-0 sudo[74078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezsvcfaravhowlgqsjohlnrxaccwhjyb ; /usr/bin/python3'
Jan 31 08:05:48 compute-0 sudo[74078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:48 compute-0 python3[74080]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:48 compute-0 sudo[74078]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:48 compute-0 sudo[74106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmnpmfoxiqtvqfzeifllzvssdqitpasw ; /usr/bin/python3'
Jan 31 08:05:48 compute-0 sudo[74106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:49 compute-0 python3[74108]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:49 compute-0 sudo[74106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:49 compute-0 sudo[74134]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccdjobxaerwfppoijqcxvgedtcyshhwp ; /usr/bin/python3'
Jan 31 08:05:49 compute-0 sudo[74134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:49 compute-0 python3[74136]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:49 compute-0 sudo[74134]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:49 compute-0 python3[74162]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:05:50 compute-0 sudo[74186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olghtgbpaupuuvhslzhxwlpbuzbfohef ; /usr/bin/python3'
Jan 31 08:05:50 compute-0 sudo[74186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:05:50 compute-0 python3[74188]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:05:50 compute-0 sshd-session[74192]: Accepted publickey for ceph-admin from 192.168.122.100 port 54406 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:05:50 compute-0 systemd-logind[793]: New session 19 of user ceph-admin.
Jan 31 08:05:50 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 08:05:50 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 08:05:50 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 08:05:50 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 08:05:50 compute-0 systemd[74196]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:05:50 compute-0 systemd[74196]: Queued start job for default target Main User Target.
Jan 31 08:05:50 compute-0 systemd[74196]: Created slice User Application Slice.
Jan 31 08:05:50 compute-0 systemd[74196]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:05:50 compute-0 systemd[74196]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:05:50 compute-0 systemd[74196]: Reached target Paths.
Jan 31 08:05:50 compute-0 systemd[74196]: Reached target Timers.
Jan 31 08:05:50 compute-0 systemd[74196]: Starting D-Bus User Message Bus Socket...
Jan 31 08:05:50 compute-0 systemd[74196]: Starting Create User's Volatile Files and Directories...
Jan 31 08:05:50 compute-0 systemd[74196]: Finished Create User's Volatile Files and Directories.
Jan 31 08:05:50 compute-0 systemd[74196]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:05:50 compute-0 systemd[74196]: Reached target Sockets.
Jan 31 08:05:50 compute-0 systemd[74196]: Reached target Basic System.
Jan 31 08:05:50 compute-0 systemd[74196]: Reached target Main User Target.
Jan 31 08:05:50 compute-0 systemd[74196]: Startup finished in 140ms.
Jan 31 08:05:50 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 08:05:50 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 08:05:50 compute-0 sshd-session[74192]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:05:50 compute-0 sudo[74212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 31 08:05:50 compute-0 sudo[74212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:05:50 compute-0 sudo[74212]: pam_unix(sudo:session): session closed for user root
Jan 31 08:05:50 compute-0 sshd-session[74211]: Received disconnect from 192.168.122.100 port 54406:11: disconnected by user
Jan 31 08:05:50 compute-0 sshd-session[74211]: Disconnected from user ceph-admin 192.168.122.100 port 54406
Jan 31 08:05:50 compute-0 sshd-session[74192]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 31 08:05:50 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 08:05:50 compute-0 systemd-logind[793]: Session 19 logged out. Waiting for processes to exit.
Jan 31 08:05:50 compute-0 systemd-logind[793]: Removed session 19.
Jan 31 08:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:05:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:05:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat913069277-merged.mount: Deactivated successfully.
Jan 31 08:05:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat913069277-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 08:06:00 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 08:06:00 compute-0 systemd[74196]: Activating special unit Exit the Session...
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped target Main User Target.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped target Basic System.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped target Paths.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped target Sockets.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped target Timers.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:06:00 compute-0 systemd[74196]: Closed D-Bus User Message Bus Socket.
Jan 31 08:06:00 compute-0 systemd[74196]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:06:00 compute-0 systemd[74196]: Removed slice User Application Slice.
Jan 31 08:06:00 compute-0 systemd[74196]: Reached target Shutdown.
Jan 31 08:06:00 compute-0 systemd[74196]: Finished Exit the Session.
Jan 31 08:06:00 compute-0 systemd[74196]: Reached target Exit the Session.
Jan 31 08:06:00 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 08:06:00 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 08:06:00 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 08:06:00 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 08:06:00 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 08:06:00 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 08:06:00 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 08:06:08 compute-0 podman[74290]: 2026-01-31 08:06:08.099106432 +0000 UTC m=+17.113487475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.157332784 +0000 UTC m=+0.038613963 container create 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:06:08 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 08:06:08 compute-0 systemd[1]: Started libpod-conmon-87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111.scope.
Jan 31 08:06:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.235506274 +0000 UTC m=+0.116787503 container init 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.139105363 +0000 UTC m=+0.020386522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.244666305 +0000 UTC m=+0.125947484 container start 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.248629868 +0000 UTC m=+0.129911117 container attach 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:06:08 compute-0 elastic_heisenberg[74367]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 08:06:08 compute-0 systemd[1]: libpod-87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.343575757 +0000 UTC m=+0.224856896 container died 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-6caee2d88ccc528950b32865eacd9cd7652eed77bba0a31065347f0a36f5e901-merged.mount: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74352]: 2026-01-31 08:06:08.383089405 +0000 UTC m=+0.264370544 container remove 87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111 (image=quay.io/ceph/ceph:v20, name=elastic_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:08 compute-0 systemd[1]: libpod-conmon-87ddcaf2608721335c0720a45555bec92cfa06894c506c3bf054e952a0279111.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.437137207 +0000 UTC m=+0.031921402 container create adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:08 compute-0 systemd[1]: Started libpod-conmon-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope.
Jan 31 08:06:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.493243758 +0000 UTC m=+0.088027973 container init adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.498464736 +0000 UTC m=+0.093248971 container start adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:06:08 compute-0 funny_kowalevski[74401]: 167 167
Jan 31 08:06:08 compute-0 systemd[1]: libpod-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 conmon[74401]: conmon adff60186a5cad5b26eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope/container/memory.events
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.502630075 +0000 UTC m=+0.097414290 container attach adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.503171791 +0000 UTC m=+0.097955996 container died adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.422424557 +0000 UTC m=+0.017208772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:08 compute-0 podman[74384]: 2026-01-31 08:06:08.54870786 +0000 UTC m=+0.143492095 container remove adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433 (image=quay.io/ceph/ceph:v20, name=funny_kowalevski, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:08 compute-0 systemd[1]: libpod-conmon-adff60186a5cad5b26ebb09c26511e59b305d28c03a9e1dd2c6272b3e19f0433.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.621097465 +0000 UTC m=+0.051180411 container create 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:08 compute-0 systemd[1]: Started libpod-conmon-25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7.scope.
Jan 31 08:06:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.686767369 +0000 UTC m=+0.116850405 container init 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.693857791 +0000 UTC m=+0.123940737 container start 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.601110515 +0000 UTC m=+0.031193471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.697599438 +0000 UTC m=+0.127682384 container attach 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 08:06:08 compute-0 nice_wu[74434]: AQDwt31pqzpvKhAAwCq/IaEW0vIW+tDr7dDTpQ==
Jan 31 08:06:08 compute-0 systemd[1]: libpod-25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.714223022 +0000 UTC m=+0.144305988 container died 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:08 compute-0 podman[74418]: 2026-01-31 08:06:08.762488019 +0000 UTC m=+0.192570995 container remove 25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7 (image=quay.io/ceph/ceph:v20, name=nice_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:06:08 compute-0 systemd[1]: libpod-conmon-25fb8379b7e61499f320ff255b1125c266a4e748739d29ab3e5e4056bd08aae7.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74453]: 2026-01-31 08:06:08.81823689 +0000 UTC m=+0.039835058 container create 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:06:08 compute-0 systemd[1]: Started libpod-conmon-4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531.scope.
Jan 31 08:06:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:08 compute-0 podman[74453]: 2026-01-31 08:06:08.870551663 +0000 UTC m=+0.092149851 container init 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:06:08 compute-0 podman[74453]: 2026-01-31 08:06:08.8739728 +0000 UTC m=+0.095570968 container start 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:06:08 compute-0 boring_mendeleev[74470]: AQDwt31pAm35NBAAi6sd/vujGwQEztDRC4JBcg==
Jan 31 08:06:08 compute-0 systemd[1]: libpod-4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531.scope: Deactivated successfully.
Jan 31 08:06:08 compute-0 podman[74453]: 2026-01-31 08:06:08.802239514 +0000 UTC m=+0.023837702 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:08 compute-0 podman[74453]: 2026-01-31 08:06:08.937848633 +0000 UTC m=+0.159446801 container attach 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:08 compute-0 podman[74453]: 2026-01-31 08:06:08.938304156 +0000 UTC m=+0.159902334 container died 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:06:09 compute-0 podman[74453]: 2026-01-31 08:06:09.056834847 +0000 UTC m=+0.278433055 container remove 4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531 (image=quay.io/ceph/ceph:v20, name=boring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:06:09 compute-0 systemd[1]: libpod-conmon-4acf7c159be8117ec4a4d1dfc4aa0fb331b6c60d7a135e11b72eb0cc1a1fd531.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.115920333 +0000 UTC m=+0.043409709 container create 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:09 compute-0 systemd[1]: Started libpod-conmon-5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b.scope.
Jan 31 08:06:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.165318473 +0000 UTC m=+0.092807849 container init 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.173228978 +0000 UTC m=+0.100718364 container start 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.178270242 +0000 UTC m=+0.105759618 container attach 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:06:09 compute-0 priceless_easley[74506]: AQDxt31pO1lCCxAA+A+ZdrL5Mu1L29mq8hUmxQ==
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.093803782 +0000 UTC m=+0.021293208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:09 compute-0 systemd[1]: libpod-5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.19116593 +0000 UTC m=+0.118655316 container died 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-22183a456f3e5aa264afdc08381e8efb5db60695eb8338f38fa200c9d3d21eaf-merged.mount: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74489]: 2026-01-31 08:06:09.224867342 +0000 UTC m=+0.152356688 container remove 5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b (image=quay.io/ceph/ceph:v20, name=priceless_easley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:06:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:06:09 compute-0 systemd[1]: libpod-conmon-5908a23937be2183051c08f790fdb036a597f7bd6f429cda1817e28110ae8b8b.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.282413703 +0000 UTC m=+0.038698575 container create 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:06:09 compute-0 systemd[1]: Started libpod-conmon-6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533.scope.
Jan 31 08:06:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7052a080da12557d25557e1b4f4e4847d22dd554ddec371bad6f88e4fd740a7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.265143211 +0000 UTC m=+0.021428133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.379304148 +0000 UTC m=+0.135589050 container init 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.385425602 +0000 UTC m=+0.141710484 container start 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.400755899 +0000 UTC m=+0.157040781 container attach 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:06:09 compute-0 goofy_knuth[74539]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 08:06:09 compute-0 goofy_knuth[74539]: setting min_mon_release = tentacle
Jan 31 08:06:09 compute-0 goofy_knuth[74539]: /usr/bin/monmaptool: set fsid to 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:09 compute-0 goofy_knuth[74539]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 08:06:09 compute-0 systemd[1]: libpod-6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.432633368 +0000 UTC m=+0.188918250 container died 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:06:09 compute-0 podman[74523]: 2026-01-31 08:06:09.464524108 +0000 UTC m=+0.220809000 container remove 6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533 (image=quay.io/ceph/ceph:v20, name=goofy_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:06:09 compute-0 systemd[1]: libpod-conmon-6df10fb94a3094f8f7a18e707f895a01470c4c1cf43d0a8164b78c3002a77533.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.530094339 +0000 UTC m=+0.050740369 container create fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:06:09 compute-0 systemd[1]: Started libpod-conmon-fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78.scope.
Jan 31 08:06:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e106549aa1b54d7be9e16716bbf642848b9f3df0fb62677b68c6360095a91022/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.592038587 +0000 UTC m=+0.112684667 container init fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.502644196 +0000 UTC m=+0.023290256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.599013876 +0000 UTC m=+0.119659896 container start fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.603397991 +0000 UTC m=+0.124044051 container attach fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:06:09 compute-0 systemd[1]: libpod-fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.704355561 +0000 UTC m=+0.225001611 container died fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:06:09 compute-0 podman[74560]: 2026-01-31 08:06:09.753535254 +0000 UTC m=+0.274181304 container remove fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78 (image=quay.io/ceph/ceph:v20, name=elegant_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:06:09 compute-0 systemd[1]: libpod-conmon-fffa2c9948f267fb33a123ff07426345723abe33c256021d5bf34d58b1c1fa78.scope: Deactivated successfully.
Jan 31 08:06:09 compute-0 systemd[1]: Reloading.
Jan 31 08:06:09 compute-0 systemd-rc-local-generator[74635]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:09 compute-0 systemd-sysv-generator[74642]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:10 compute-0 systemd[1]: Reloading.
Jan 31 08:06:10 compute-0 systemd-sysv-generator[74684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:10 compute-0 systemd-rc-local-generator[74680]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:10 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 08:06:10 compute-0 systemd[1]: Reloading.
Jan 31 08:06:10 compute-0 systemd-sysv-generator[74721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:10 compute-0 systemd-rc-local-generator[74716]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:10 compute-0 systemd[1]: Reached target Ceph cluster 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:10 compute-0 systemd[1]: Reloading.
Jan 31 08:06:10 compute-0 systemd-rc-local-generator[74753]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:10 compute-0 systemd-sysv-generator[74756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:10 compute-0 systemd[1]: Reloading.
Jan 31 08:06:10 compute-0 systemd-rc-local-generator[74796]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:10 compute-0 systemd-sysv-generator[74799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:11 compute-0 systemd[1]: Created slice Slice /system/ceph-82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:11 compute-0 systemd[1]: Reached target System Time Set.
Jan 31 08:06:11 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 31 08:06:11 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:06:11 compute-0 podman[74854]: 2026-01-31 08:06:11.191210993 +0000 UTC m=+0.043648006 container create f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 podman[74854]: 2026-01-31 08:06:11.24472951 +0000 UTC m=+0.097166513 container init f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:11 compute-0 podman[74854]: 2026-01-31 08:06:11.252159342 +0000 UTC m=+0.104596325 container start f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:11 compute-0 bash[74854]: f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410
Jan 31 08:06:11 compute-0 podman[74854]: 2026-01-31 08:06:11.170201834 +0000 UTC m=+0.022638847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:11 compute-0 systemd[1]: Started Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:11 compute-0 ceph-mon[74874]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: pidfile_write: ignore empty --pid-file
Jan 31 08:06:11 compute-0 ceph-mon[74874]: load: jerasure load: lrc 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Git sha 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: DB SUMMARY
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: DB Session ID:  6H349LA39CZV4Z01SNE0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                                     Options.env: 0x55fed842c440
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                                Options.info_log: 0x55fed9b1b3e0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                                 Options.wal_dir: 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                    Options.write_buffer_manager: 0x55fed9a9a140
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                               Options.row_cache: None
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                              Options.wal_filter: None
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.wal_compression: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.max_background_jobs: 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Compression algorithms supported:
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kZSTD supported: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:           Options.merge_operator: 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:        Options.compaction_filter: None
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fed9aa6600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fed9a8b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.compression: NoCompression
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.num_levels: 7
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 91992687-9ca4-489a-811f-a25b3432622d
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846771295993, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846771298133, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "6H349LA39CZV4Z01SNE0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846771298230, "job": 1, "event": "recovery_finished"}
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fed9ab8e00
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: DB pointer 0x55fed9c04000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:06:11 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fed9a8b8d0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:06:11 compute-0 ceph-mon[74874]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@-1(???) e0 preinit fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 08:06:11 compute-0 ceph-mon[74874]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 08:06:11 compute-0 ceph-mon[74874]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : created 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T08:06:09.636428Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,os=Linux}
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.332603737 +0000 UTC m=+0.045295503 container create 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).mds e1 new map
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-31T08:06:11:330734+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mkfs 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 08:06:11 compute-0 systemd[1]: Started libpod-conmon-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope.
Jan 31 08:06:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.316146888 +0000 UTC m=+0.028838654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.424268142 +0000 UTC m=+0.136959928 container init 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.431598252 +0000 UTC m=+0.144290038 container start 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.43820213 +0000 UTC m=+0.150893916 container attach 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 08:06:11 compute-0 ceph-mon[74874]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889949626' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:   cluster:
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     id:     82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     health: HEALTH_OK
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:  
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:   services:
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     mon: 1 daemons, quorum compute-0 (age 0.30607s) [leader: compute-0]
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     mgr: no daemons active
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     osd: 0 osds: 0 up, 0 in
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:  
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:   data:
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     pools:   0 pools, 0 pgs
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     objects: 0 objects, 0 B
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     usage:   0 B used, 0 B / 0 B avail
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:     pgs:     
Jan 31 08:06:11 compute-0 festive_lumiere[74929]:  
Jan 31 08:06:11 compute-0 systemd[1]: libpod-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope: Deactivated successfully.
Jan 31 08:06:11 compute-0 conmon[74929]: conmon 54e821b7956b175c66df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope/container/memory.events
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.654184042 +0000 UTC m=+0.366875808 container died 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a014a1838a9320b526b1b3e835f56b1b44a1cea195dfd8e04c226dc90f00892-merged.mount: Deactivated successfully.
Jan 31 08:06:11 compute-0 podman[74875]: 2026-01-31 08:06:11.689709926 +0000 UTC m=+0.402401692 container remove 54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6 (image=quay.io/ceph/ceph:v20, name=festive_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:11 compute-0 systemd[1]: libpod-conmon-54e821b7956b175c66dfe973edd2bfe859a35443a2cd871b77021e96253e12b6.scope: Deactivated successfully.
Jan 31 08:06:11 compute-0 podman[74966]: 2026-01-31 08:06:11.748823212 +0000 UTC m=+0.040220988 container create 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:06:11 compute-0 systemd[1]: Started libpod-conmon-8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b.scope.
Jan 31 08:06:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:11 compute-0 podman[74966]: 2026-01-31 08:06:11.818198572 +0000 UTC m=+0.109596378 container init 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:11 compute-0 podman[74966]: 2026-01-31 08:06:11.822900466 +0000 UTC m=+0.114298262 container start 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:11 compute-0 podman[74966]: 2026-01-31 08:06:11.729793599 +0000 UTC m=+0.021191445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:11 compute-0 podman[74966]: 2026-01-31 08:06:11.826349254 +0000 UTC m=+0.117747110 container attach 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:12 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 08:06:12 compute-0 ceph-mon[74874]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 08:06:12 compute-0 ceph-mon[74874]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 08:06:12 compute-0 angry_blackwell[74982]: 
Jan 31 08:06:12 compute-0 angry_blackwell[74982]: [global]
Jan 31 08:06:12 compute-0 angry_blackwell[74982]:         fsid = 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:12 compute-0 angry_blackwell[74982]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 08:06:12 compute-0 angry_blackwell[74982]:         osd_crush_chooseleaf_type = 0
Jan 31 08:06:12 compute-0 systemd[1]: libpod-8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b.scope: Deactivated successfully.
Jan 31 08:06:12 compute-0 podman[74966]: 2026-01-31 08:06:12.06523838 +0000 UTC m=+0.356636186 container died 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-daa8e467b09be295f56e45055c4e280eab349f5d2a2a5f3948918bfcf77737f3-merged.mount: Deactivated successfully.
Jan 31 08:06:12 compute-0 podman[74966]: 2026-01-31 08:06:12.098899871 +0000 UTC m=+0.390297667 container remove 8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b (image=quay.io/ceph/ceph:v20, name=angry_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:06:12 compute-0 systemd[1]: libpod-conmon-8849f7880c4fa518602bcc17e8cc7042ac55833407da3d0aab956eb9cb88339b.scope: Deactivated successfully.
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.15110096 +0000 UTC m=+0.037086569 container create 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:06:12 compute-0 systemd[1]: Started libpod-conmon-4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62.scope.
Jan 31 08:06:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.133334173 +0000 UTC m=+0.019319872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.232006108 +0000 UTC m=+0.117991737 container init 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.236080984 +0000 UTC m=+0.122066643 container start 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.240208592 +0000 UTC m=+0.126194241 container attach 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:12 compute-0 ceph-mon[74874]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 08:06:12 compute-0 ceph-mon[74874]: monmap epoch 1
Jan 31 08:06:12 compute-0 ceph-mon[74874]: fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:12 compute-0 ceph-mon[74874]: last_changed 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:12 compute-0 ceph-mon[74874]: created 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:12 compute-0 ceph-mon[74874]: min_mon_release 20 (tentacle)
Jan 31 08:06:12 compute-0 ceph-mon[74874]: election_strategy: 1
Jan 31 08:06:12 compute-0 ceph-mon[74874]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 08:06:12 compute-0 ceph-mon[74874]: fsmap 
Jan 31 08:06:12 compute-0 ceph-mon[74874]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 08:06:12 compute-0 ceph-mon[74874]: mgrmap e1: no daemons active
Jan 31 08:06:12 compute-0 ceph-mon[74874]: from='client.? 192.168.122.100:0/3889949626' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 08:06:12 compute-0 ceph-mon[74874]: from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 08:06:12 compute-0 ceph-mon[74874]: from='client.? 192.168.122.100:0/1700683954' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 08:06:12 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:12 compute-0 ceph-mon[74874]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999526819' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:12 compute-0 systemd[1]: libpod-4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62.scope: Deactivated successfully.
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.441464384 +0000 UTC m=+0.327450043 container died 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-353e19aeff3f6ecc31e430c8778428fadca4f612e0e20af9ca64f077f338afa5-merged.mount: Deactivated successfully.
Jan 31 08:06:12 compute-0 podman[75019]: 2026-01-31 08:06:12.477295937 +0000 UTC m=+0.363281556 container remove 4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62 (image=quay.io/ceph/ceph:v20, name=cranky_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:06:12 compute-0 systemd[1]: libpod-conmon-4f3f2ae0239b1ebf946e7e1d7aae4a6881adcf8f61e0be75b22ecfa9668fca62.scope: Deactivated successfully.
Jan 31 08:06:12 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:06:12 compute-0 ceph-mon[74874]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 08:06:12 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 08:06:12 compute-0 ceph-mon[74874]: mon.compute-0@0(leader) e1 shutdown
Jan 31 08:06:12 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0[74870]: 2026-01-31T08:06:12.644+0000 7f7b5c911640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 08:06:12 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0[74870]: 2026-01-31T08:06:12.644+0000 7f7b5c911640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 08:06:12 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 08:06:12 compute-0 ceph-mon[74874]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 08:06:12 compute-0 podman[75103]: 2026-01-31 08:06:12.797661487 +0000 UTC m=+0.183322331 container died f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb6a935dd4de1434407d93089d65dd8910047cdc0a19e76d531e36be2b7ad1bf-merged.mount: Deactivated successfully.
Jan 31 08:06:12 compute-0 podman[75103]: 2026-01-31 08:06:12.829855156 +0000 UTC m=+0.215515980 container remove f08e8aa80cf9a7a7195af8b48d8c358f703628ba6c0d03776496a000ae724410 (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:12 compute-0 bash[75103]: ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0
Jan 31 08:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 08:06:12 compute-0 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mon.compute-0.service: Deactivated successfully.
Jan 31 08:06:12 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:12 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:06:13 compute-0 podman[75207]: 2026-01-31 08:06:13.145509211 +0000 UTC m=+0.047157137 container create 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8cc71e567ad31db8632cdc03ce8ca731d897ab1ea79d8674ba90ce0ed77e04a/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 podman[75207]: 2026-01-31 08:06:13.121577778 +0000 UTC m=+0.023225744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:13 compute-0 podman[75207]: 2026-01-31 08:06:13.224123354 +0000 UTC m=+0.125771290 container init 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:06:13 compute-0 podman[75207]: 2026-01-31 08:06:13.232143692 +0000 UTC m=+0.133791588 container start 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:06:13 compute-0 bash[75207]: 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a
Jan 31 08:06:13 compute-0 systemd[1]: Started Ceph mon.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:13 compute-0 ceph-mon[75227]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: pidfile_write: ignore empty --pid-file
Jan 31 08:06:13 compute-0 ceph-mon[75227]: load: jerasure load: lrc 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Git sha 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: DB SUMMARY
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: DB Session ID:  RDN3DWKE2K2I6QTJYIJY
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                                     Options.env: 0x55bf4a6e3440
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                                Options.info_log: 0x55bf4c749e80
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                                 Options.wal_dir: 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                    Options.write_buffer_manager: 0x55bf4c794140
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                               Options.row_cache: None
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                              Options.wal_filter: None
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.wal_compression: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.max_background_jobs: 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Compression algorithms supported:
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kZSTD supported: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:           Options.merge_operator: 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:        Options.compaction_filter: None
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bf4c7a0a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bf4c7858d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.compression: NoCompression
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.num_levels: 7
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 91992687-9ca4-489a-811f-a25b3432622d
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846773282060, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846773286610, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846773, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846773286782, "job": 1, "event": "recovery_finished"}
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bf4c7b2e00
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: DB pointer 0x55bf4c8fc000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:06:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.77 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:06:13 compute-0 ceph-mon[75227]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???) e1 preinit fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).mds e1 new map
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-31T08:06:11:330734+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 08:06:13 compute-0 ceph-mon[75227]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.309567911 +0000 UTC m=+0.051177721 container create 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : last_changed 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : created 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 08:06:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 08:06:13 compute-0 systemd[1]: Started libpod-conmon-315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5.scope.
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: monmap epoch 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:13 compute-0 ceph-mon[75227]: last_changed 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: created 2026-01-31T08:06:09.429767+0000
Jan 31 08:06:13 compute-0 ceph-mon[75227]: min_mon_release 20 (tentacle)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: election_strategy: 1
Jan 31 08:06:13 compute-0 ceph-mon[75227]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 31 08:06:13 compute-0 ceph-mon[75227]: fsmap 
Jan 31 08:06:13 compute-0 ceph-mon[75227]: osdmap e1: 0 total, 0 up, 0 in
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mgrmap e1: no daemons active
Jan 31 08:06:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.282031976 +0000 UTC m=+0.023641786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.388749821 +0000 UTC m=+0.130359661 container init 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.396834791 +0000 UTC m=+0.138444601 container start 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.402449011 +0000 UTC m=+0.144058821 container attach 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 31 08:06:13 compute-0 systemd[1]: libpod-315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5.scope: Deactivated successfully.
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.598585107 +0000 UTC m=+0.340194877 container died 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:06:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-63bea5f1aa5813187d491eb84d1dfbf8cf5a3ae7c10d2301067363596704f8e4-merged.mount: Deactivated successfully.
Jan 31 08:06:13 compute-0 podman[75228]: 2026-01-31 08:06:13.636339985 +0000 UTC m=+0.377949755 container remove 315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5 (image=quay.io/ceph/ceph:v20, name=pensive_pike, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:06:13 compute-0 systemd[1]: libpod-conmon-315e875abeca5f4120baad4ec3b234d42e12031e5871f0ff40c5a325642a69a5.scope: Deactivated successfully.
Jan 31 08:06:13 compute-0 podman[75320]: 2026-01-31 08:06:13.685460816 +0000 UTC m=+0.035594646 container create 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:13 compute-0 systemd[1]: Started libpod-conmon-80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0.scope.
Jan 31 08:06:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:13 compute-0 podman[75320]: 2026-01-31 08:06:13.760084865 +0000 UTC m=+0.110218665 container init 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:13 compute-0 podman[75320]: 2026-01-31 08:06:13.764655926 +0000 UTC m=+0.114789716 container start 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:13 compute-0 podman[75320]: 2026-01-31 08:06:13.669936793 +0000 UTC m=+0.020070603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:13 compute-0 podman[75320]: 2026-01-31 08:06:13.76832655 +0000 UTC m=+0.118460390 container attach 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:06:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 31 08:06:14 compute-0 systemd[1]: libpod-80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0.scope: Deactivated successfully.
Jan 31 08:06:14 compute-0 podman[75320]: 2026-01-31 08:06:14.006411023 +0000 UTC m=+0.356544813 container died 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-69dd95ed5ba0b5ce59bb298233ab41e307dffe64b4c68af12a92125ab8f73d5b-merged.mount: Deactivated successfully.
Jan 31 08:06:14 compute-0 podman[75320]: 2026-01-31 08:06:14.046056154 +0000 UTC m=+0.396189954 container remove 80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0 (image=quay.io/ceph/ceph:v20, name=exciting_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:06:14 compute-0 systemd[1]: libpod-conmon-80803b89065d90c870bfcd3e5f574afdb60a305369cf4f78d62679b1aa9d58f0.scope: Deactivated successfully.
Jan 31 08:06:14 compute-0 systemd[1]: Reloading.
Jan 31 08:06:14 compute-0 systemd-sysv-generator[75405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:14 compute-0 systemd-rc-local-generator[75400]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:14 compute-0 systemd[1]: Reloading.
Jan 31 08:06:14 compute-0 systemd-sysv-generator[75441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:14 compute-0 systemd-rc-local-generator[75437]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:14 compute-0 systemd[1]: Starting Ceph mgr.compute-0.fqetdi for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:06:14 compute-0 podman[75500]: 2026-01-31 08:06:14.771187183 +0000 UTC m=+0.032405885 container create 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6ccffd1cad692b3d9f9bd82a84fe440c5b183daf8cb8df4c0863a357fdcd315/merged/var/lib/ceph/mgr/ceph-compute-0.fqetdi supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 podman[75500]: 2026-01-31 08:06:14.816681851 +0000 UTC m=+0.077900553 container init 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:06:14 compute-0 podman[75500]: 2026-01-31 08:06:14.824646779 +0000 UTC m=+0.085865481 container start 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 31 08:06:14 compute-0 bash[75500]: 469c441ebd046e516e3cb4dcf3c038c0dda2d507872e226173c5df8275cf3dab
Jan 31 08:06:14 compute-0 podman[75500]: 2026-01-31 08:06:14.75601053 +0000 UTC m=+0.017229262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:14 compute-0 systemd[1]: Started Ceph mgr.compute-0.fqetdi for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:14 compute-0 ceph-mgr[75519]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:06:14 compute-0 ceph-mgr[75519]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 08:06:14 compute-0 ceph-mgr[75519]: pidfile_write: ignore empty --pid-file
Jan 31 08:06:14 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'alerts'
Jan 31 08:06:14 compute-0 podman[75520]: 2026-01-31 08:06:14.902557211 +0000 UTC m=+0.042526974 container create 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:06:14 compute-0 systemd[1]: Started libpod-conmon-7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4.scope.
Jan 31 08:06:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:14 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'balancer'
Jan 31 08:06:14 compute-0 podman[75520]: 2026-01-31 08:06:14.887909663 +0000 UTC m=+0.027879446 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:15 compute-0 podman[75520]: 2026-01-31 08:06:14.992627881 +0000 UTC m=+0.132597724 container init 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:06:15 compute-0 podman[75520]: 2026-01-31 08:06:15.027365172 +0000 UTC m=+0.167334955 container start 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:15 compute-0 podman[75520]: 2026-01-31 08:06:15.030558163 +0000 UTC m=+0.170528006 container attach 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:15 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'cephadm'
Jan 31 08:06:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 08:06:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2154000332' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:15 compute-0 interesting_carver[75557]: 
Jan 31 08:06:15 compute-0 interesting_carver[75557]: {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "health": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "status": "HEALTH_OK",
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "checks": {},
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "mutes": []
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "election_epoch": 5,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "quorum": [
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         0
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     ],
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "quorum_names": [
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "compute-0"
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     ],
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "quorum_age": 1,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "monmap": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "epoch": 1,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "min_mon_release_name": "tentacle",
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_mons": 1
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "osdmap": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "epoch": 1,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_osds": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_up_osds": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "osd_up_since": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_in_osds": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "osd_in_since": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_remapped_pgs": 0
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "pgmap": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "pgs_by_state": [],
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_pgs": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_pools": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_objects": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "data_bytes": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "bytes_used": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "bytes_avail": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "bytes_total": 0
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "fsmap": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "epoch": 1,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "btime": "2026-01-31T08:06:11:330734+0000",
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "by_rank": [],
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "up:standby": 0
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "mgrmap": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "available": false,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "num_standbys": 0,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "modules": [
Jan 31 08:06:15 compute-0 interesting_carver[75557]:             "iostat",
Jan 31 08:06:15 compute-0 interesting_carver[75557]:             "nfs"
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         ],
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "services": {}
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "servicemap": {
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "epoch": 1,
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 08:06:15 compute-0 interesting_carver[75557]:         "services": {}
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     },
Jan 31 08:06:15 compute-0 interesting_carver[75557]:     "progress_events": {}
Jan 31 08:06:15 compute-0 interesting_carver[75557]: }
Jan 31 08:06:15 compute-0 systemd[1]: libpod-7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4.scope: Deactivated successfully.
Jan 31 08:06:15 compute-0 podman[75520]: 2026-01-31 08:06:15.211593299 +0000 UTC m=+0.351563102 container died 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-15d5b2da0c203584c86a0995679d6094248a7c9a03dd7efae5eb1d2e757e1884-merged.mount: Deactivated successfully.
Jan 31 08:06:15 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2154000332' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:15 compute-0 podman[75520]: 2026-01-31 08:06:15.257679223 +0000 UTC m=+0.397648976 container remove 7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4 (image=quay.io/ceph/ceph:v20, name=interesting_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:15 compute-0 systemd[1]: libpod-conmon-7e761ed3ba74d97b65df006dcdd730f08855e4b3762e6f2ec03013a700398fc4.scope: Deactivated successfully.
Jan 31 08:06:15 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'crash'
Jan 31 08:06:15 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'dashboard'
Jan 31 08:06:16 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'devicehealth'
Jan 31 08:06:16 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 08:06:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 08:06:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 08:06:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]:   from numpy import show_config as show_numpy_config
Jan 31 08:06:16 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'influx'
Jan 31 08:06:16 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'insights'
Jan 31 08:06:16 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'iostat'
Jan 31 08:06:16 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'k8sevents'
Jan 31 08:06:17 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'localpool'
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.315286209 +0000 UTC m=+0.040869667 container create ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:06:17 compute-0 systemd[1]: Started libpod-conmon-ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db.scope.
Jan 31 08:06:17 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 08:06:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.29571864 +0000 UTC m=+0.021302148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.410741792 +0000 UTC m=+0.136325280 container init ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.415946161 +0000 UTC m=+0.141529659 container start ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.425335538 +0000 UTC m=+0.150919087 container attach ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:06:17 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'mirroring'
Jan 31 08:06:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 08:06:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422912478' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]: 
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]: {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "health": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "status": "HEALTH_OK",
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "checks": {},
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "mutes": []
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "election_epoch": 5,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "quorum": [
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         0
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     ],
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "quorum_names": [
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "compute-0"
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     ],
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "quorum_age": 4,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "monmap": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "epoch": 1,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "min_mon_release_name": "tentacle",
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_mons": 1
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "osdmap": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "epoch": 1,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_osds": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_up_osds": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "osd_up_since": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_in_osds": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "osd_in_since": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_remapped_pgs": 0
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "pgmap": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "pgs_by_state": [],
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_pgs": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_pools": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_objects": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "data_bytes": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "bytes_used": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "bytes_avail": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "bytes_total": 0
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "fsmap": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "epoch": 1,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "btime": "2026-01-31T08:06:11:330734+0000",
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "by_rank": [],
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "up:standby": 0
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "mgrmap": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "available": false,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "num_standbys": 0,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "modules": [
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:             "iostat",
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:             "nfs"
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         ],
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "services": {}
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "servicemap": {
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "epoch": 1,
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:         "services": {}
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     },
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]:     "progress_events": {}
Jan 31 08:06:17 compute-0 romantic_goldwasser[75624]: }
Jan 31 08:06:17 compute-0 systemd[1]: libpod-ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db.scope: Deactivated successfully.
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.634820715 +0000 UTC m=+0.360404213 container died ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb1738e14227f5fb58a06f193086f87c10fdd439641f98e4db4c746b2f51c05a-merged.mount: Deactivated successfully.
Jan 31 08:06:17 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3422912478' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:17 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'nfs'
Jan 31 08:06:17 compute-0 podman[75607]: 2026-01-31 08:06:17.672293695 +0000 UTC m=+0.397877193 container remove ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db (image=quay.io/ceph/ceph:v20, name=romantic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:06:17 compute-0 systemd[1]: libpod-conmon-ca011698c682b67686028e598ae132eb7790771423469c1dd6e692f5b7a0c8db.scope: Deactivated successfully.
Jan 31 08:06:17 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'orchestrator'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'osd_support'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'progress'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'prometheus'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'rbd_support'
Jan 31 08:06:18 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'rgw'
Jan 31 08:06:19 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'rook'
Jan 31 08:06:19 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'selftest'
Jan 31 08:06:19 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'smb'
Jan 31 08:06:19 compute-0 podman[75662]: 2026-01-31 08:06:19.739362361 +0000 UTC m=+0.045314314 container create 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:19 compute-0 systemd[1]: Started libpod-conmon-63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058.scope.
Jan 31 08:06:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:19 compute-0 podman[75662]: 2026-01-31 08:06:19.719786912 +0000 UTC m=+0.025738895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:19 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'snap_schedule'
Jan 31 08:06:19 compute-0 podman[75662]: 2026-01-31 08:06:19.860912519 +0000 UTC m=+0.166864522 container init 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:19 compute-0 podman[75662]: 2026-01-31 08:06:19.866098417 +0000 UTC m=+0.172050370 container start 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:19 compute-0 podman[75662]: 2026-01-31 08:06:19.88269516 +0000 UTC m=+0.188647203 container attach 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:06:19 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'stats'
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'status'
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253638719' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:20 compute-0 romantic_saha[75678]: 
Jan 31 08:06:20 compute-0 romantic_saha[75678]: {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "health": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "status": "HEALTH_OK",
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "checks": {},
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "mutes": []
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "election_epoch": 5,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "quorum": [
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         0
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     ],
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "quorum_names": [
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "compute-0"
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     ],
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "quorum_age": 6,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "monmap": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "epoch": 1,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "min_mon_release_name": "tentacle",
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_mons": 1
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "osdmap": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "epoch": 1,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_osds": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_up_osds": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "osd_up_since": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_in_osds": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "osd_in_since": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_remapped_pgs": 0
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "pgmap": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "pgs_by_state": [],
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_pgs": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_pools": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_objects": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "data_bytes": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "bytes_used": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "bytes_avail": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "bytes_total": 0
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "fsmap": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "epoch": 1,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "btime": "2026-01-31T08:06:11.330734+0000",
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "by_rank": [],
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "up:standby": 0
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "mgrmap": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "available": false,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "num_standbys": 0,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "modules": [
Jan 31 08:06:20 compute-0 romantic_saha[75678]:             "iostat",
Jan 31 08:06:20 compute-0 romantic_saha[75678]:             "nfs"
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         ],
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "services": {}
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "servicemap": {
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "epoch": 1,
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 08:06:20 compute-0 romantic_saha[75678]:         "services": {}
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     },
Jan 31 08:06:20 compute-0 romantic_saha[75678]:     "progress_events": {}
Jan 31 08:06:20 compute-0 romantic_saha[75678]: }
Jan 31 08:06:20 compute-0 systemd[1]: libpod-63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058.scope: Deactivated successfully.
Jan 31 08:06:20 compute-0 podman[75662]: 2026-01-31 08:06:20.060015659 +0000 UTC m=+0.365967632 container died 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-78e341f31351f7b614d9f052021e1214027bf2d65a6967ff1383a260b424bb3a-merged.mount: Deactivated successfully.
Jan 31 08:06:20 compute-0 podman[75662]: 2026-01-31 08:06:20.093629418 +0000 UTC m=+0.399581371 container remove 63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058 (image=quay.io/ceph/ceph:v20, name=romantic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/253638719' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'telegraf'
Jan 31 08:06:20 compute-0 systemd[1]: libpod-conmon-63b0fd9a8ce7039ead3914ee9e068906bc2f461272af0875cc969d889e7a6058.scope: Deactivated successfully.
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'telemetry'
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'volumes'
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: ms_deliver_dispatch: unhandled message 0x55a9c8a8b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fqetdi
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr handle_mgr_map Activating!
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.fqetdi(active, starting, since 0.00868354s)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr handle_mgr_map I am now activating
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: balancer
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [balancer INFO root] Starting
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: crash
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:20
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Manager daemon compute-0.fqetdi is now available
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [balancer INFO root] No pools available
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: devicehealth
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: iostat
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Starting
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: nfs
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: orchestrator
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: pg_autoscaler
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: progress
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [progress INFO root] Loading...
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [progress INFO root] No stored events to load
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [progress INFO root] Loaded [] historic events
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] recovery thread starting
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] starting setup
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: rbd_support
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: status
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: telemetry
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] PerfHandler: starting
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TaskHandler: starting
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: [rbd_support INFO root] setup complete
Jan 31 08:06:20 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: volumes
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 31 08:06:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:21 compute-0 ceph-mon[75227]: Activating manager daemon compute-0.fqetdi
Jan 31 08:06:21 compute-0 ceph-mon[75227]: mgrmap e2: compute-0.fqetdi(active, starting, since 0.00868354s)
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: Manager daemon compute-0.fqetdi is now available
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:21 compute-0 ceph-mon[75227]: from='mgr.14102 192.168.122.100:0/1405045557' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:21 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.fqetdi(active, since 1.01893s)
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.165904462 +0000 UTC m=+0.051569132 container create ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:06:22 compute-0 systemd[1]: Started libpod-conmon-ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b.scope.
Jan 31 08:06:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.144434659 +0000 UTC m=+0.030099379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.245719189 +0000 UTC m=+0.131383879 container init ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.252119132 +0000 UTC m=+0.137783802 container start ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.254652834 +0000 UTC m=+0.140317534 container attach ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 31 08:06:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842329094' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:22 compute-0 zen_bartik[75812]: 
Jan 31 08:06:22 compute-0 zen_bartik[75812]: {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "health": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "status": "HEALTH_OK",
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "checks": {},
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "mutes": []
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "election_epoch": 5,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "quorum": [
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         0
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     ],
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "quorum_names": [
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "compute-0"
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     ],
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "quorum_age": 9,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "monmap": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "epoch": 1,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "min_mon_release_name": "tentacle",
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_mons": 1
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "osdmap": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "epoch": 1,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_osds": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_up_osds": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "osd_up_since": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_in_osds": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "osd_in_since": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_remapped_pgs": 0
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "pgmap": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "pgs_by_state": [],
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_pgs": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_pools": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_objects": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "data_bytes": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "bytes_used": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "bytes_avail": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "bytes_total": 0
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "fsmap": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "epoch": 1,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "btime": "2026-01-31T08:06:11.330734+0000",
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "by_rank": [],
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "up:standby": 0
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "mgrmap": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "available": true,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "num_standbys": 0,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "modules": [
Jan 31 08:06:22 compute-0 zen_bartik[75812]:             "iostat",
Jan 31 08:06:22 compute-0 zen_bartik[75812]:             "nfs"
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         ],
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "services": {}
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "servicemap": {
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "epoch": 1,
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "modified": "2026-01-31T08:06:11.333031+0000",
Jan 31 08:06:22 compute-0 zen_bartik[75812]:         "services": {}
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     },
Jan 31 08:06:22 compute-0 zen_bartik[75812]:     "progress_events": {}
Jan 31 08:06:22 compute-0 zen_bartik[75812]: }
Jan 31 08:06:22 compute-0 systemd[1]: libpod-ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b.scope: Deactivated successfully.
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.796087032 +0000 UTC m=+0.681751752 container died ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:06:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1981d6f87c8699b59f69e5e30a7286081452b126a686a1f83210992fa02ac967-merged.mount: Deactivated successfully.
Jan 31 08:06:22 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:22 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:22 compute-0 podman[75795]: 2026-01-31 08:06:22.907655245 +0000 UTC m=+0.793319915 container remove ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b (image=quay.io/ceph/ceph:v20, name=zen_bartik, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:06:22 compute-0 systemd[1]: libpod-conmon-ce19327d297282dd6243d2d03d01bfd186b4f5dfdc3dbd4f49623166fcbdbd2b.scope: Deactivated successfully.
Jan 31 08:06:22 compute-0 ceph-mon[75227]: mgrmap e3: compute-0.fqetdi(active, since 1.01893s)
Jan 31 08:06:22 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1842329094' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 31 08:06:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.fqetdi(active, since 2s)
Jan 31 08:06:22 compute-0 podman[75850]: 2026-01-31 08:06:22.966055281 +0000 UTC m=+0.043108981 container create 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:06:23 compute-0 systemd[1]: Started libpod-conmon-4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3.scope.
Jan 31 08:06:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 podman[75850]: 2026-01-31 08:06:22.942918361 +0000 UTC m=+0.019972081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:23 compute-0 podman[75850]: 2026-01-31 08:06:23.042218034 +0000 UTC m=+0.119271804 container init 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:23 compute-0 podman[75850]: 2026-01-31 08:06:23.050641615 +0000 UTC m=+0.127695355 container start 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:06:23 compute-0 podman[75850]: 2026-01-31 08:06:23.054619988 +0000 UTC m=+0.131673778 container attach 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:06:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 08:06:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/394067677' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 08:06:23 compute-0 inspiring_bassi[75866]: 
Jan 31 08:06:23 compute-0 inspiring_bassi[75866]: [global]
Jan 31 08:06:23 compute-0 inspiring_bassi[75866]:         fsid = 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:23 compute-0 inspiring_bassi[75866]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 08:06:23 compute-0 inspiring_bassi[75866]:         osd_crush_chooseleaf_type = 0
Jan 31 08:06:23 compute-0 systemd[1]: libpod-4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3.scope: Deactivated successfully.
Jan 31 08:06:23 compute-0 podman[75850]: 2026-01-31 08:06:23.457014969 +0000 UTC m=+0.534068679 container died 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:06:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c1f88261158232e49504efbaea092f9fcbae5b5c89590fd10884ba3887554ac-merged.mount: Deactivated successfully.
Jan 31 08:06:23 compute-0 podman[75850]: 2026-01-31 08:06:23.488138017 +0000 UTC m=+0.565191747 container remove 4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3 (image=quay.io/ceph/ceph:v20, name=inspiring_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:23 compute-0 systemd[1]: libpod-conmon-4bf5e59d495ef036949b9ccb6386c8806a5b4a06ebae7e0438657d805c09c2b3.scope: Deactivated successfully.
Jan 31 08:06:23 compute-0 podman[75904]: 2026-01-31 08:06:23.534592623 +0000 UTC m=+0.032901880 container create f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:06:23 compute-0 systemd[1]: Started libpod-conmon-f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d.scope.
Jan 31 08:06:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:23 compute-0 podman[75904]: 2026-01-31 08:06:23.599826134 +0000 UTC m=+0.098135491 container init f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:23 compute-0 podman[75904]: 2026-01-31 08:06:23.605544377 +0000 UTC m=+0.103853674 container start f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:23 compute-0 podman[75904]: 2026-01-31 08:06:23.609536991 +0000 UTC m=+0.107846368 container attach f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:23 compute-0 podman[75904]: 2026-01-31 08:06:23.520410668 +0000 UTC m=+0.018719955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:23 compute-0 ceph-mon[75227]: mgrmap e4: compute-0.fqetdi(active, since 2s)
Jan 31 08:06:23 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/394067677' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 08:06:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 31 08:06:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:24 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 31 08:06:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  1: '-n'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  2: 'mgr.compute-0.fqetdi'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  3: '-f'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  4: '--setuser'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  5: 'ceph'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  6: '--setgroup'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  7: 'ceph'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 08:06:24 compute-0 ceph-mgr[75519]: mgr respawn  exe_path /proc/self/exe
Jan 31 08:06:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.fqetdi(active, since 4s)
Jan 31 08:06:24 compute-0 systemd[1]: libpod-f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d.scope: Deactivated successfully.
Jan 31 08:06:24 compute-0 podman[75904]: 2026-01-31 08:06:24.994020371 +0000 UTC m=+1.492329628 container died f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a87469c0b76e6e24d1a2d7e90bccef0e9cc0b96016168f0adf28fc53fcab3d0-merged.mount: Deactivated successfully.
Jan 31 08:06:25 compute-0 podman[75904]: 2026-01-31 08:06:25.023367619 +0000 UTC m=+1.521676866 container remove f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d (image=quay.io/ceph/ceph:v20, name=angry_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:06:25 compute-0 systemd[1]: libpod-conmon-f83a982200bc508ad879312be8b369f32eff1398413c6a997ed8883d3fea3c3d.scope: Deactivated successfully.
Jan 31 08:06:25 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: ignoring --setuser ceph since I am not root
Jan 31 08:06:25 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: ignoring --setgroup ceph since I am not root
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: pidfile_write: ignore empty --pid-file
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'alerts'
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.078062509 +0000 UTC m=+0.038715505 container create 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:25 compute-0 systemd[1]: Started libpod-conmon-39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d.scope.
Jan 31 08:06:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.15451385 +0000 UTC m=+0.115166846 container init 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.158506834 +0000 UTC m=+0.119159830 container start 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.063184345 +0000 UTC m=+0.023837361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.161900221 +0000 UTC m=+0.122553237 container attach 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'balancer'
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'cephadm'
Jan 31 08:06:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 08:06:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3544431121' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 08:06:25 compute-0 ecstatic_ride[75994]: {
Jan 31 08:06:25 compute-0 ecstatic_ride[75994]:     "epoch": 5,
Jan 31 08:06:25 compute-0 ecstatic_ride[75994]:     "available": true,
Jan 31 08:06:25 compute-0 ecstatic_ride[75994]:     "active_name": "compute-0.fqetdi",
Jan 31 08:06:25 compute-0 ecstatic_ride[75994]:     "num_standby": 0
Jan 31 08:06:25 compute-0 ecstatic_ride[75994]: }
Jan 31 08:06:25 compute-0 systemd[1]: libpod-39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d.scope: Deactivated successfully.
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.625090526 +0000 UTC m=+0.585743562 container died 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 08:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-58165f7bba250035aea1c7200b4f1a3a32be727bd07eedf000f61c80b3886d20-merged.mount: Deactivated successfully.
Jan 31 08:06:25 compute-0 podman[75958]: 2026-01-31 08:06:25.677119081 +0000 UTC m=+0.637772077 container remove 39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d (image=quay.io/ceph/ceph:v20, name=ecstatic_ride, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:25 compute-0 systemd[1]: libpod-conmon-39a8efdbaaad66dc0db5178439c3afb85e584ee2e1e9ebca5acb78b723acf74d.scope: Deactivated successfully.
Jan 31 08:06:25 compute-0 podman[76042]: 2026-01-31 08:06:25.748364564 +0000 UTC m=+0.051347026 container create d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:06:25 compute-0 systemd[1]: Started libpod-conmon-d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7.scope.
Jan 31 08:06:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:25 compute-0 podman[76042]: 2026-01-31 08:06:25.727686094 +0000 UTC m=+0.030668586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:25 compute-0 podman[76042]: 2026-01-31 08:06:25.824214528 +0000 UTC m=+0.127196990 container init d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:25 compute-0 podman[76042]: 2026-01-31 08:06:25.830676052 +0000 UTC m=+0.133658514 container start d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:06:25 compute-0 podman[76042]: 2026-01-31 08:06:25.837093765 +0000 UTC m=+0.140076227 container attach d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'crash'
Jan 31 08:06:25 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'dashboard'
Jan 31 08:06:25 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2042196584' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 08:06:25 compute-0 ceph-mon[75227]: mgrmap e5: compute-0.fqetdi(active, since 4s)
Jan 31 08:06:25 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3544431121' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 08:06:26 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'devicehealth'
Jan 31 08:06:26 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 08:06:26 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 08:06:26 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 08:06:26 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]:   from numpy import show_config as show_numpy_config
Jan 31 08:06:26 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'influx'
Jan 31 08:06:26 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'insights'
Jan 31 08:06:26 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'iostat'
Jan 31 08:06:27 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'k8sevents'
Jan 31 08:06:27 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'localpool'
Jan 31 08:06:27 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 08:06:27 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'mirroring'
Jan 31 08:06:27 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'nfs'
Jan 31 08:06:27 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'orchestrator'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'osd_support'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'progress'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'prometheus'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'rbd_support'
Jan 31 08:06:28 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'rgw'
Jan 31 08:06:29 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'rook'
Jan 31 08:06:29 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'selftest'
Jan 31 08:06:29 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'smb'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'snap_schedule'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'stats'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'status'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'telegraf'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'telemetry'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: mgr[py] Loading python module 'volumes'
Jan 31 08:06:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Active manager daemon compute-0.fqetdi restarted
Jan 31 08:06:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 08:06:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:06:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.fqetdi
Jan 31 08:06:30 compute-0 ceph-mgr[75519]: ms_deliver_dispatch: unhandled message 0x555800902000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: mgr handle_mgr_map Activating!
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: mgr handle_mgr_map I am now activating
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.fqetdi(active, starting, since 0.59262s)
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} v 0)
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} : dispatch
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:06:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: balancer
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Starting
Jan 31 08:06:31 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Manager daemon compute-0.fqetdi is now available
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:31
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:06:31 compute-0 ceph-mgr[75519]: [balancer INFO root] No pools available
Jan 31 08:06:32 compute-0 ceph-mon[75227]: Active manager daemon compute-0.fqetdi restarted
Jan 31 08:06:32 compute-0 ceph-mon[75227]: Activating manager daemon compute-0.fqetdi
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.fqetdi(active, since 1.7327s)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 08:06:32 compute-0 upbeat_meninsky[76059]: {
Jan 31 08:06:32 compute-0 upbeat_meninsky[76059]:     "mgrmap_epoch": 7,
Jan 31 08:06:32 compute-0 upbeat_meninsky[76059]:     "initialized": true
Jan 31 08:06:32 compute-0 upbeat_meninsky[76059]: }
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: cephadm
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: crash
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: devicehealth
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Starting
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: iostat
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: nfs
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: orchestrator
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: pg_autoscaler
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: progress
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [progress INFO root] Loading...
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [progress INFO root] No stored events to load
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [progress INFO root] Loaded [] historic events
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 08:06:32 compute-0 systemd[1]: libpod-d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7.scope: Deactivated successfully.
Jan 31 08:06:32 compute-0 podman[76042]: 2026-01-31 08:06:32.768216017 +0000 UTC m=+7.071198479 container died d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] recovery thread starting
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] starting setup
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: rbd_support
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: status
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: telemetry
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} v 0)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] PerfHandler: starting
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TaskHandler: starting
Jan 31 08:06:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} v 0)
Jan 31 08:06:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] setup complete
Jan 31 08:06:32 compute-0 ceph-mgr[75519]: mgr load Constructed class from module: volumes
Jan 31 08:06:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cd9c719eea4c895912027297b59bb9716404b693004c9214b22633d2e620161-merged.mount: Deactivated successfully.
Jan 31 08:06:32 compute-0 podman[76042]: 2026-01-31 08:06:32.828448985 +0000 UTC m=+7.131431447 container remove d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7 (image=quay.io/ceph/ceph:v20, name=upbeat_meninsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:06:32 compute-0 systemd[1]: libpod-conmon-d9c50f097fcaca1a3603c392e217df9b5c8ed9e3b32b3817662b4d5f0e9d15b7.scope: Deactivated successfully.
Jan 31 08:06:32 compute-0 podman[76208]: 2026-01-31 08:06:32.876866687 +0000 UTC m=+0.029281187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:32 compute-0 podman[76208]: 2026-01-31 08:06:32.982041107 +0000 UTC m=+0.134455607 container create 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:33 compute-0 systemd[1]: Started libpod-conmon-797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919.scope.
Jan 31 08:06:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:33 compute-0 podman[76208]: 2026-01-31 08:06:33.193593163 +0000 UTC m=+0.346007723 container init 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:33 compute-0 podman[76208]: 2026-01-31 08:06:33.19944216 +0000 UTC m=+0.351856650 container start 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:33 compute-0 ceph-mon[75227]: osdmap e2: 0 total, 0 up, 0 in
Jan 31 08:06:33 compute-0 ceph-mon[75227]: mgrmap e6: compute-0.fqetdi(active, starting, since 0.59262s)
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr metadata", "who": "compute-0.fqetdi", "id": "compute-0.fqetdi"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: Manager daemon compute-0.fqetdi is now available
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:33 compute-0 ceph-mon[75227]: Found migration_current of "None". Setting to last migration.
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: mgrmap e7: compute-0.fqetdi(active, since 1.7327s)
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/mirror_snapshot_schedule"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.fqetdi/trash_purge_schedule"} : dispatch
Jan 31 08:06:33 compute-0 podman[76208]: 2026-01-31 08:06:33.244795004 +0000 UTC m=+0.397209544 container attach 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019902321 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:33 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 31 08:06:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 08:06:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 08:06:33 compute-0 upbeat_nobel[76224]: module 'orchestrator' is already enabled (always-on)
Jan 31 08:06:33 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.fqetdi(active, since 2s)
Jan 31 08:06:33 compute-0 systemd[1]: libpod-797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919.scope: Deactivated successfully.
Jan 31 08:06:33 compute-0 podman[76208]: 2026-01-31 08:06:33.780883879 +0000 UTC m=+0.933298379 container died 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:06:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c7666363b81423e9648fbafe6146de190628b13dc767a77c1f7b1796f77577c-merged.mount: Deactivated successfully.
Jan 31 08:06:33 compute-0 podman[76208]: 2026-01-31 08:06:33.823036912 +0000 UTC m=+0.975451382 container remove 797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919 (image=quay.io/ceph/ceph:v20, name=upbeat_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:06:33 compute-0 systemd[1]: libpod-conmon-797c18e7b54a0286f2a01ef9d7299d0794cf58904360c091ba37c0bdee1f7919.scope: Deactivated successfully.
Jan 31 08:06:33 compute-0 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:33] ENGINE Bus STARTING
Jan 31 08:06:33 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:33] ENGINE Bus STARTING
Jan 31 08:06:33 compute-0 podman[76262]: 2026-01-31 08:06:33.889810217 +0000 UTC m=+0.046126707 container create 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:33 compute-0 systemd[1]: Started libpod-conmon-1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b.scope.
Jan 31 08:06:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:33 compute-0 podman[76262]: 2026-01-31 08:06:33.870909508 +0000 UTC m=+0.027225998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:33 compute-0 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:33] ENGINE Serving on http://192.168.122.100:8765
Jan 31 08:06:33 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:33] ENGINE Serving on http://192.168.122.100:8765
Jan 31 08:06:34 compute-0 podman[76262]: 2026-01-31 08:06:34.061742412 +0000 UTC m=+0.218058922 container init 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:34 compute-0 podman[76262]: 2026-01-31 08:06:34.068428233 +0000 UTC m=+0.224744713 container start 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:06:34 compute-0 podman[76262]: 2026-01-31 08:06:34.097481362 +0000 UTC m=+0.253797842 container attach 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:34] ENGINE Serving on https://192.168.122.100:7150
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:34] ENGINE Serving on https://192.168.122.100:7150
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:34] ENGINE Bus STARTED
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:34] ENGINE Bus STARTED
Jan 31 08:06:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:06:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: [cephadm INFO cherrypy.error] [31/Jan/2026:08:06:34] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : [31/Jan/2026:08:06:34] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 08:06:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 31 08:06:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2957936051' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 31 08:06:34 compute-0 ceph-mon[75227]: mgrmap e8: compute-0.fqetdi(active, since 2s)
Jan 31 08:06:34 compute-0 ceph-mon[75227]: [31/Jan/2026:08:06:33] ENGINE Bus STARTING
Jan 31 08:06:34 compute-0 ceph-mon[75227]: [31/Jan/2026:08:06:33] ENGINE Serving on http://192.168.122.100:8765
Jan 31 08:06:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 31 08:06:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:06:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:34 compute-0 systemd[1]: libpod-1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b.scope: Deactivated successfully.
Jan 31 08:06:34 compute-0 podman[76262]: 2026-01-31 08:06:34.545160124 +0000 UTC m=+0.701476574 container died 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ae8f188609a2b377330f893d0db05c6fe6e451bb8886bebb7a8b8e48f080fac-merged.mount: Deactivated successfully.
Jan 31 08:06:34 compute-0 podman[76262]: 2026-01-31 08:06:34.579777862 +0000 UTC m=+0.736094322 container remove 1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b (image=quay.io/ceph/ceph:v20, name=priceless_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:06:34 compute-0 systemd[1]: libpod-conmon-1cd147078c52ae02ed97207c3d6e4f78cf008e91fc6e335d8f227f542d97101b.scope: Deactivated successfully.
Jan 31 08:06:34 compute-0 podman[76339]: 2026-01-31 08:06:34.634505223 +0000 UTC m=+0.041659219 container create 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:34 compute-0 systemd[1]: Started libpod-conmon-40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501.scope.
Jan 31 08:06:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:34 compute-0 podman[76339]: 2026-01-31 08:06:34.701061882 +0000 UTC m=+0.108215928 container init 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:34 compute-0 podman[76339]: 2026-01-31 08:06:34.613227836 +0000 UTC m=+0.020381932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:34 compute-0 podman[76339]: 2026-01-31 08:06:34.708124324 +0000 UTC m=+0.115278350 container start 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:34 compute-0 podman[76339]: 2026-01-31 08:06:34.712532349 +0000 UTC m=+0.119686375 container attach 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:34 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 31 08:06:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_user
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 08:06:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 31 08:06:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_config
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 08:06:35 compute-0 nervous_roentgen[76356]: ssh user set to ceph-admin. sudo will be used
Jan 31 08:06:35 compute-0 systemd[1]: libpod-40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[76339]: 2026-01-31 08:06:35.139143641 +0000 UTC m=+0.546297647 container died 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-32439d0d1bc702333bae972009f0a01afc6473d87252424836e929d48f3281bd-merged.mount: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[76339]: 2026-01-31 08:06:35.16888978 +0000 UTC m=+0.576043776 container remove 40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501 (image=quay.io/ceph/ceph:v20, name=nervous_roentgen, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:06:35 compute-0 systemd[1]: libpod-conmon-40d6afce10e986b04e4554face0d9eb6a536c7b919cf6b15d638d53e65c3b501.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.229103968 +0000 UTC m=+0.046406215 container create f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:06:35 compute-0 systemd[1]: Started libpod-conmon-f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293.scope.
Jan 31 08:06:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.204775874 +0000 UTC m=+0.022078161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.30593185 +0000 UTC m=+0.123234087 container init f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.319375223 +0000 UTC m=+0.136677490 container start f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.324375586 +0000 UTC m=+0.141677843 container attach f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:06:35 compute-0 ceph-mon[75227]: [31/Jan/2026:08:06:34] ENGINE Serving on https://192.168.122.100:7150
Jan 31 08:06:35 compute-0 ceph-mon[75227]: [31/Jan/2026:08:06:34] ENGINE Bus STARTED
Jan 31 08:06:35 compute-0 ceph-mon[75227]: [31/Jan/2026:08:06:34] ENGINE Client ('192.168.122.100', 44404) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 08:06:35 compute-0 ceph-mon[75227]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 31 08:06:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: [cephadm INFO root] Set ssh private key
Jan 31 08:06:35 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 08:06:35 compute-0 systemd[1]: libpod-f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.785555934 +0000 UTC m=+0.602858171 container died f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc0c814298041b3754a2ebfce51fedc1e4405c646afd350484bc532147e5997c-merged.mount: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[76393]: 2026-01-31 08:06:35.814559772 +0000 UTC m=+0.631862009 container remove f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293 (image=quay.io/ceph/ceph:v20, name=sleepy_taussig, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:35 compute-0 systemd[1]: libpod-conmon-f8f0736a701247daedeed3ccf41e48cd2a42443df78d5398e34176c213930293.scope: Deactivated successfully.
Jan 31 08:06:35 compute-0 podman[76447]: 2026-01-31 08:06:35.85901198 +0000 UTC m=+0.029677318 container create 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:06:35 compute-0 systemd[1]: Started libpod-conmon-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope.
Jan 31 08:06:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:35 compute-0 podman[76447]: 2026-01-31 08:06:35.918556079 +0000 UTC m=+0.089221387 container init 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:06:35 compute-0 podman[76447]: 2026-01-31 08:06:35.927756851 +0000 UTC m=+0.098422189 container start 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:06:35 compute-0 podman[76447]: 2026-01-31 08:06:35.932052374 +0000 UTC m=+0.102717712 container attach 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:06:35 compute-0 podman[76447]: 2026-01-31 08:06:35.844703622 +0000 UTC m=+0.015368940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:36 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 31 08:06:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:36 compute-0 ceph-mgr[75519]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 08:06:36 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 08:06:36 compute-0 systemd[1]: libpod-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope: Deactivated successfully.
Jan 31 08:06:36 compute-0 conmon[76463]: conmon 8a7d52f7b432d4f0791f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope/container/memory.events
Jan 31 08:06:36 compute-0 podman[76447]: 2026-01-31 08:06:36.618229051 +0000 UTC m=+0.788894359 container died 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a832180bdd05b534c2b460fcb39de361c93ac1183e99333332641a9313937ff7-merged.mount: Deactivated successfully.
Jan 31 08:06:36 compute-0 podman[76447]: 2026-01-31 08:06:36.650280036 +0000 UTC m=+0.820945344 container remove 8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429 (image=quay.io/ceph/ceph:v20, name=naughty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:36 compute-0 systemd[1]: libpod-conmon-8a7d52f7b432d4f0791fd7d1e16be0e3ddaa606f5cade47e960b3f41e3cdd429.scope: Deactivated successfully.
Jan 31 08:06:36 compute-0 podman[76501]: 2026-01-31 08:06:36.726793319 +0000 UTC m=+0.061958209 container create 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:36 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:36 compute-0 systemd[1]: Started libpod-conmon-3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903.scope.
Jan 31 08:06:36 compute-0 ceph-mon[75227]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:36 compute-0 ceph-mon[75227]: Set ssh ssh_user
Jan 31 08:06:36 compute-0 ceph-mon[75227]: Set ssh ssh_config
Jan 31 08:06:36 compute-0 ceph-mon[75227]: ssh user set to ceph-admin. sudo will be used
Jan 31 08:06:36 compute-0 ceph-mon[75227]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:36 compute-0 ceph-mon[75227]: Set ssh ssh_identity_key
Jan 31 08:06:36 compute-0 ceph-mon[75227]: Set ssh private key
Jan 31 08:06:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:36 compute-0 podman[76501]: 2026-01-31 08:06:36.685158631 +0000 UTC m=+0.020323571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:36 compute-0 podman[76501]: 2026-01-31 08:06:36.822831139 +0000 UTC m=+0.157996039 container init 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:36 compute-0 podman[76501]: 2026-01-31 08:06:36.829046956 +0000 UTC m=+0.164211876 container start 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 31 08:06:36 compute-0 podman[76501]: 2026-01-31 08:06:36.833499993 +0000 UTC m=+0.168664893 container attach 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:37 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:37 compute-0 cool_keldysh[76517]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBX4tgy1ewLcXde4iDx3CKeVUkIjIVtx9Ubv8Ne1qBY2gF18JFQDhG9+Uj8MmytXfbfbslwMxIRDThV4W4akGN/R6C+84jmeDosYAUouDsqoQ+ZSR52oY4W75/cTFCaXQyMa0I2alM942WPSngyKA12FpaluWyAaN9ZlrhM7LHe7zF9oEg8yrX02rbnzU+5fleC/Q9H1jArgVklTV5r/dLiDj+H/ZYjb1zNROtH9pH7rWKS9CB7lCeflFijdGli5ChbWFLosewDRqB4IO2D/Xb64a3YLAnqrCmwFRUTZyG5dt40IPkhOqP6Cpr5V+xjYzejxIJ2HIVWZ3/MDyDhNRKHDjwG6z+Mdzb/fsJAUCzWLPyZrq7RxOThV4tOXL57arAZHdsl7tt4LWvwt2gUxSqsmFiGEiGYXkqKS89UHvpHpyHtvqAcvKUlUJX2YZNyOkc+doGJN6EeBpc+1Buwcp5aV/Xw/Fs77zMkzBRA+HOTb+hMd1R9HHZqt195rKwevE= zuul@controller
Jan 31 08:06:37 compute-0 systemd[1]: libpod-3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903.scope: Deactivated successfully.
Jan 31 08:06:37 compute-0 podman[76501]: 2026-01-31 08:06:37.251927491 +0000 UTC m=+0.587092381 container died 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:06:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-55e2e2db1c084c144bd35193adaff4501190539daa8cd51e9c0e3b6a76fa27d2-merged.mount: Deactivated successfully.
Jan 31 08:06:37 compute-0 podman[76501]: 2026-01-31 08:06:37.292407854 +0000 UTC m=+0.627572754 container remove 3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903 (image=quay.io/ceph/ceph:v20, name=cool_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:37 compute-0 systemd[1]: libpod-conmon-3c4f19dc2fd8c211680a0e384846147e4063efe840fdfe728828aacfbde9e903.scope: Deactivated successfully.
Jan 31 08:06:37 compute-0 podman[76555]: 2026-01-31 08:06:37.368156546 +0000 UTC m=+0.055783576 container create aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:37 compute-0 systemd[1]: Started libpod-conmon-aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1.scope.
Jan 31 08:06:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:37 compute-0 podman[76555]: 2026-01-31 08:06:37.34500455 +0000 UTC m=+0.032631650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:37 compute-0 podman[76555]: 2026-01-31 08:06:37.505905259 +0000 UTC m=+0.193532339 container init aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:37 compute-0 podman[76555]: 2026-01-31 08:06:37.511723118 +0000 UTC m=+0.199350168 container start aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:06:37 compute-0 podman[76555]: 2026-01-31 08:06:37.5163638 +0000 UTC m=+0.203990850 container attach aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:06:37 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:37 compute-0 ceph-mon[75227]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:37 compute-0 ceph-mon[75227]: Set ssh ssh_identity_pub
Jan 31 08:06:37 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:38 compute-0 sshd-session[76597]: Accepted publickey for ceph-admin from 192.168.122.100 port 44118 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:38 compute-0 systemd-logind[793]: New session 21 of user ceph-admin.
Jan 31 08:06:38 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 08:06:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 08:06:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 08:06:38 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 31 08:06:38 compute-0 systemd[76601]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052584 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:38 compute-0 sshd-session[76614]: Accepted publickey for ceph-admin from 192.168.122.100 port 44134 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:38 compute-0 systemd[76601]: Queued start job for default target Main User Target.
Jan 31 08:06:38 compute-0 systemd-logind[793]: New session 23 of user ceph-admin.
Jan 31 08:06:38 compute-0 systemd[76601]: Created slice User Application Slice.
Jan 31 08:06:38 compute-0 systemd[76601]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 08:06:38 compute-0 systemd[76601]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:06:38 compute-0 systemd[76601]: Reached target Paths.
Jan 31 08:06:38 compute-0 systemd[76601]: Reached target Timers.
Jan 31 08:06:38 compute-0 systemd[76601]: Starting D-Bus User Message Bus Socket...
Jan 31 08:06:38 compute-0 systemd[76601]: Starting Create User's Volatile Files and Directories...
Jan 31 08:06:38 compute-0 systemd[76601]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:06:38 compute-0 systemd[76601]: Reached target Sockets.
Jan 31 08:06:38 compute-0 systemd[76601]: Finished Create User's Volatile Files and Directories.
Jan 31 08:06:38 compute-0 systemd[76601]: Reached target Basic System.
Jan 31 08:06:38 compute-0 systemd[76601]: Reached target Main User Target.
Jan 31 08:06:38 compute-0 systemd[76601]: Startup finished in 156ms.
Jan 31 08:06:38 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 31 08:06:38 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 08:06:38 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 08:06:38 compute-0 sshd-session[76597]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:38 compute-0 sshd-session[76614]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:38 compute-0 sudo[76621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:38 compute-0 sudo[76621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:38 compute-0 sudo[76621]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:38 compute-0 sshd-session[76646]: Accepted publickey for ceph-admin from 192.168.122.100 port 44136 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:38 compute-0 systemd-logind[793]: New session 24 of user ceph-admin.
Jan 31 08:06:38 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:38 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 08:06:38 compute-0 sshd-session[76646]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:38 compute-0 sudo[76650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 08:06:38 compute-0 sudo[76650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:38 compute-0 ceph-mon[75227]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:38 compute-0 ceph-mon[75227]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:38 compute-0 sudo[76650]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:39 compute-0 sshd-session[76675]: Accepted publickey for ceph-admin from 192.168.122.100 port 44142 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:39 compute-0 systemd-logind[793]: New session 25 of user ceph-admin.
Jan 31 08:06:39 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 08:06:39 compute-0 sshd-session[76675]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:39 compute-0 sudo[76679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 31 08:06:39 compute-0 sudo[76679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:39 compute-0 sudo[76679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:39 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 08:06:39 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 08:06:39 compute-0 sshd-session[76704]: Accepted publickey for ceph-admin from 192.168.122.100 port 44156 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:39 compute-0 systemd-logind[793]: New session 26 of user ceph-admin.
Jan 31 08:06:39 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 08:06:39 compute-0 sshd-session[76704]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:39 compute-0 sudo[76708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:39 compute-0 sudo[76708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:39 compute-0 sudo[76708]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:39 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:39 compute-0 sshd-session[76733]: Accepted publickey for ceph-admin from 192.168.122.100 port 44164 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:39 compute-0 systemd-logind[793]: New session 27 of user ceph-admin.
Jan 31 08:06:39 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 08:06:39 compute-0 sshd-session[76733]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:39 compute-0 sudo[76737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:39 compute-0 sudo[76737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:39 compute-0 sudo[76737]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:40 compute-0 ceph-mon[75227]: Deploying cephadm binary to compute-0
Jan 31 08:06:40 compute-0 sshd-session[76762]: Accepted publickey for ceph-admin from 192.168.122.100 port 44166 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:40 compute-0 systemd-logind[793]: New session 28 of user ceph-admin.
Jan 31 08:06:40 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 08:06:40 compute-0 sshd-session[76762]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:40 compute-0 sudo[76766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 31 08:06:40 compute-0 sudo[76766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:40 compute-0 sudo[76766]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:40 compute-0 sshd-session[76791]: Accepted publickey for ceph-admin from 192.168.122.100 port 44174 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:40 compute-0 systemd-logind[793]: New session 29 of user ceph-admin.
Jan 31 08:06:40 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 08:06:40 compute-0 sshd-session[76791]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:40 compute-0 sudo[76795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:40 compute-0 sudo[76795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:40 compute-0 sudo[76795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:40 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:40 compute-0 sshd-session[76820]: Accepted publickey for ceph-admin from 192.168.122.100 port 44180 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:40 compute-0 systemd-logind[793]: New session 30 of user ceph-admin.
Jan 31 08:06:40 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 08:06:40 compute-0 sshd-session[76820]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:40 compute-0 sudo[76824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 31 08:06:40 compute-0 sudo[76824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:40 compute-0 sudo[76824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:41 compute-0 sshd-session[76849]: Accepted publickey for ceph-admin from 192.168.122.100 port 44196 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:41 compute-0 systemd-logind[793]: New session 31 of user ceph-admin.
Jan 31 08:06:41 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 08:06:41 compute-0 sshd-session[76849]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:41 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:42 compute-0 sshd-session[76876]: Accepted publickey for ceph-admin from 192.168.122.100 port 44198 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:42 compute-0 systemd-logind[793]: New session 32 of user ceph-admin.
Jan 31 08:06:42 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 08:06:42 compute-0 sshd-session[76876]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:42 compute-0 sudo[76880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 31 08:06:42 compute-0 sudo[76880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:42 compute-0 sudo[76880]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:42 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:42 compute-0 sshd-session[76905]: Accepted publickey for ceph-admin from 192.168.122.100 port 44204 ssh2: RSA SHA256:JZpQN7Htt0viR9Hdw23gW7S/V9mK0acyC3Hf9I+9Mfc
Jan 31 08:06:42 compute-0 systemd-logind[793]: New session 33 of user ceph-admin.
Jan 31 08:06:42 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 08:06:42 compute-0 sshd-session[76905]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 31 08:06:43 compute-0 sudo[76909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 08:06:43 compute-0 sudo[76909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:43 compute-0 sudo[76909]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 08:06:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:43 compute-0 ceph-mgr[75519]: [cephadm INFO root] Added host compute-0
Jan 31 08:06:43 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 08:06:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:06:43 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:43 compute-0 busy_cray[76571]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 08:06:43 compute-0 systemd[1]: libpod-aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1.scope: Deactivated successfully.
Jan 31 08:06:43 compute-0 podman[76555]: 2026-01-31 08:06:43.39880585 +0000 UTC m=+6.086432960 container died aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:43 compute-0 sudo[76954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:43 compute-0 sudo[76954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a72e48cc5cbfe120b0cc27e791be21d5a5197f9cae8ece7190cb84b58e1c04ba-merged.mount: Deactivated successfully.
Jan 31 08:06:43 compute-0 sudo[76954]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:43 compute-0 podman[76555]: 2026-01-31 08:06:43.443046274 +0000 UTC m=+6.130673324 container remove aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1 (image=quay.io/ceph/ceph:v20, name=busy_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:43 compute-0 systemd[1]: libpod-conmon-aa07dac2bb59d4040b9be5c77fee3736d55d858413dc56eda3532580d7b0b8e1.scope: Deactivated successfully.
Jan 31 08:06:43 compute-0 sudo[76989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 31 08:06:43 compute-0 sudo[76989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:43 compute-0 podman[77002]: 2026-01-31 08:06:43.510331806 +0000 UTC m=+0.043684145 container create a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:43 compute-0 systemd[1]: Started libpod-conmon-a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47.scope.
Jan 31 08:06:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:43 compute-0 podman[77002]: 2026-01-31 08:06:43.494078858 +0000 UTC m=+0.027431217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:43 compute-0 podman[77002]: 2026-01-31 08:06:43.593656597 +0000 UTC m=+0.127009016 container init a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:43 compute-0 podman[77002]: 2026-01-31 08:06:43.60115552 +0000 UTC m=+0.134507869 container start a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:06:43 compute-0 podman[77002]: 2026-01-31 08:06:43.604705983 +0000 UTC m=+0.138058422 container attach a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:43 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:44 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:44 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 08:06:44 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 08:06:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 08:06:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:44 compute-0 relaxed_banach[77030]: Scheduled mon update...
Jan 31 08:06:44 compute-0 systemd[1]: libpod-a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47.scope: Deactivated successfully.
Jan 31 08:06:44 compute-0 podman[77002]: 2026-01-31 08:06:44.107969391 +0000 UTC m=+0.641321730 container died a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-08d5565d88cad172ea08d91ae7d34a5a8ccdb21f804263fbf596dc4bfe8701be-merged.mount: Deactivated successfully.
Jan 31 08:06:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:44 compute-0 ceph-mon[75227]: Added host compute-0
Jan 31 08:06:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:06:44 compute-0 ceph-mon[75227]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:44 compute-0 ceph-mon[75227]: Saving service mon spec with placement count:5
Jan 31 08:06:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:44 compute-0 podman[77002]: 2026-01-31 08:06:44.478882218 +0000 UTC m=+1.012234567 container remove a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47 (image=quay.io/ceph/ceph:v20, name=relaxed_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:44 compute-0 podman[77047]: 2026-01-31 08:06:44.508613943 +0000 UTC m=+0.803315299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:44 compute-0 podman[77095]: 2026-01-31 08:06:44.586791696 +0000 UTC m=+0.084070630 container create 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:06:44 compute-0 podman[77095]: 2026-01-31 08:06:44.535198452 +0000 UTC m=+0.032477436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:44 compute-0 systemd[1]: Started libpod-conmon-209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552.scope.
Jan 31 08:06:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:44 compute-0 systemd[1]: libpod-conmon-a9a9bb4f6b3fd499009a288c59ba24fbc6213836692e88f9db1b49fd48453a47.scope: Deactivated successfully.
Jan 31 08:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:44 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:44 compute-0 podman[77095]: 2026-01-31 08:06:44.767203651 +0000 UTC m=+0.264482615 container init 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:44 compute-0 podman[77095]: 2026-01-31 08:06:44.774688932 +0000 UTC m=+0.271967906 container start 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:44 compute-0 podman[77125]: 2026-01-31 08:06:44.688634212 +0000 UTC m=+0.024130026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:44 compute-0 podman[77095]: 2026-01-31 08:06:44.817003302 +0000 UTC m=+0.314282256 container attach 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:44 compute-0 podman[77125]: 2026-01-31 08:06:44.84589525 +0000 UTC m=+0.181391084 container create 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:06:44 compute-0 systemd[1]: Started libpod-conmon-3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04.scope.
Jan 31 08:06:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:45 compute-0 podman[77125]: 2026-01-31 08:06:45.063907916 +0000 UTC m=+0.399403750 container init 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:06:45 compute-0 podman[77125]: 2026-01-31 08:06:45.069091647 +0000 UTC m=+0.404587481 container start 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:45 compute-0 podman[77125]: 2026-01-31 08:06:45.1555094 +0000 UTC m=+0.491005224 container attach 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:06:45 compute-0 lucid_spence[77165]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 31 08:06:45 compute-0 systemd[1]: libpod-3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04.scope: Deactivated successfully.
Jan 31 08:06:45 compute-0 podman[77125]: 2026-01-31 08:06:45.167835812 +0000 UTC m=+0.503331606 container died 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:06:45 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:45 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 08:06:45 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 08:06:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:45 compute-0 stupefied_hermann[77127]: Scheduled mgr update...
Jan 31 08:06:45 compute-0 systemd[1]: libpod-209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552.scope: Deactivated successfully.
Jan 31 08:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a1341c3f7e4358bddb770aa99959bfbaa44855e1231c21c4fe5c19c162f34df-merged.mount: Deactivated successfully.
Jan 31 08:06:45 compute-0 podman[77125]: 2026-01-31 08:06:45.496218888 +0000 UTC m=+0.831714722 container remove 3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04 (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:45 compute-0 systemd[1]: libpod-conmon-3fa8bcda8a61e13f2f83f1d4d4cfb73e681b6941f0e8e75ffb6dbeffdb3fcf04.scope: Deactivated successfully.
Jan 31 08:06:45 compute-0 podman[77095]: 2026-01-31 08:06:45.534477319 +0000 UTC m=+1.031756293 container died 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:45 compute-0 sudo[76989]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ec594152b8685f894ab4a93be75383b7b104243fef9aff9112d8f8fd60c135e-merged.mount: Deactivated successfully.
Jan 31 08:06:45 compute-0 podman[77183]: 2026-01-31 08:06:45.57933376 +0000 UTC m=+0.213650680 container remove 209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552 (image=quay.io/ceph/ceph:v20, name=stupefied_hermann, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:45 compute-0 systemd[1]: libpod-conmon-209bac04f4ba64719fe17bf0ec5b51a90ea929c1cf0b745a19cfea123a6b2552.scope: Deactivated successfully.
Jan 31 08:06:45 compute-0 sudo[77199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:45 compute-0 sudo[77199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:45 compute-0 sudo[77199]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 podman[77212]: 2026-01-31 08:06:45.634002934 +0000 UTC m=+0.041958819 container create a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:06:45 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:45 compute-0 systemd[1]: Started libpod-conmon-a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce.scope.
Jan 31 08:06:45 compute-0 sudo[77238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:06:45 compute-0 sudo[77238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:45 compute-0 podman[77212]: 2026-01-31 08:06:45.70913754 +0000 UTC m=+0.117093435 container init a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:06:45 compute-0 podman[77212]: 2026-01-31 08:06:45.613420011 +0000 UTC m=+0.021375926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:45 compute-0 podman[77212]: 2026-01-31 08:06:45.715246496 +0000 UTC m=+0.123202381 container start a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:06:45 compute-0 podman[77212]: 2026-01-31 08:06:45.718528874 +0000 UTC m=+0.126484749 container attach a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:06:45 compute-0 sudo[77238]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:45 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 sudo[77308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:46 compute-0 sudo[77308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:46 compute-0 sudo[77308]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:46 compute-0 sudo[77333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:06:46 compute-0 sudo[77333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:46 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:46 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 08:06:46 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 08:06:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 08:06:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 zen_haslett[77264]: Scheduled crash update...
Jan 31 08:06:46 compute-0 systemd[1]: libpod-a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[77212]: 2026-01-31 08:06:46.159643478 +0000 UTC m=+0.567599383 container died a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-839abe8c39a4f7a14f13e5592f33a14c9d5a6645b342d4a6630942334c312542-merged.mount: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[77212]: 2026-01-31 08:06:46.201891882 +0000 UTC m=+0.609847767 container remove a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce (image=quay.io/ceph/ceph:v20, name=zen_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:06:46 compute-0 systemd[1]: libpod-conmon-a788a87643a0f4d874b3b476297eac1878e5bc92d2a433a241bfa985e4e15dce.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.261462762 +0000 UTC m=+0.039744057 container create 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:06:46 compute-0 systemd[1]: Started libpod-conmon-1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6.scope.
Jan 31 08:06:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:46 compute-0 ceph-mon[75227]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:46 compute-0 ceph-mon[75227]: Saving service mgr spec with placement count:2
Jan 31 08:06:46 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.340499213 +0000 UTC m=+0.118780528 container init 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.246882425 +0000 UTC m=+0.025163770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.34640275 +0000 UTC m=+0.124684085 container start 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.350679149 +0000 UTC m=+0.128960464 container attach 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:46 compute-0 podman[77445]: 2026-01-31 08:06:46.558109142 +0000 UTC m=+0.075998026 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:06:46 compute-0 podman[77445]: 2026-01-31 08:06:46.640767012 +0000 UTC m=+0.158655906 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:46 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 31 08:06:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/697546649' entity='client.admin' 
Jan 31 08:06:46 compute-0 systemd[1]: libpod-1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.830129861 +0000 UTC m=+0.608411186 container died 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-005e56fa849a4a4739584b21d6c8acc8428d23c4e54b06cc0cdf9f3a76fb018e-merged.mount: Deactivated successfully.
Jan 31 08:06:46 compute-0 podman[77371]: 2026-01-31 08:06:46.873218061 +0000 UTC m=+0.651499356 container remove 1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6 (image=quay.io/ceph/ceph:v20, name=eager_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:46 compute-0 sudo[77333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:46 compute-0 systemd[1]: libpod-conmon-1c4ee3d8e17f6b2cb5101fd0889b0d3ed46b505890b20722c252ceb476c4b6c6.scope: Deactivated successfully.
Jan 31 08:06:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:46 compute-0 podman[77544]: 2026-01-31 08:06:46.943022772 +0000 UTC m=+0.043604298 container create 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:06:46 compute-0 sudo[77551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:46 compute-0 sudo[77551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:46 compute-0 sudo[77551]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:46 compute-0 systemd[1]: Started libpod-conmon-0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de.scope.
Jan 31 08:06:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:47 compute-0 sudo[77585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:06:47 compute-0 sudo[77585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-0 podman[77544]: 2026-01-31 08:06:46.924012373 +0000 UTC m=+0.024593879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:47 compute-0 podman[77544]: 2026-01-31 08:06:47.027389478 +0000 UTC m=+0.127970994 container init 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:47 compute-0 podman[77544]: 2026-01-31 08:06:47.03433079 +0000 UTC m=+0.134912316 container start 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:47 compute-0 podman[77544]: 2026-01-31 08:06:47.037948379 +0000 UTC m=+0.138529895 container attach 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:06:47 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77647 (sysctl)
Jan 31 08:06:47 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 08:06:47 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 08:06:47 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 31 08:06:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:47 compute-0 systemd[1]: libpod-0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de.scope: Deactivated successfully.
Jan 31 08:06:47 compute-0 podman[77661]: 2026-01-31 08:06:47.512945305 +0000 UTC m=+0.022449313 container died 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d4db5f777fb0e009b6d1efec9ed339438a030b541784c91f3749925a346019b-merged.mount: Deactivated successfully.
Jan 31 08:06:47 compute-0 sudo[77585]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 podman[77661]: 2026-01-31 08:06:47.551547257 +0000 UTC m=+0.061051225 container remove 0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de (image=quay.io/ceph/ceph:v20, name=frosty_chebyshev, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:47 compute-0 systemd[1]: libpod-conmon-0a4bf1c3490db1aabfab6bf1f7df2080f9eada39630016dab063f242e1ca02de.scope: Deactivated successfully.
Jan 31 08:06:47 compute-0 podman[77686]: 2026-01-31 08:06:47.619128536 +0000 UTC m=+0.047418435 container create 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:47 compute-0 sudo[77698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:47 compute-0 sudo[77698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-0 sudo[77698]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:47 compute-0 systemd[1]: Started libpod-conmon-83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247.scope.
Jan 31 08:06:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:47 compute-0 sudo[77727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 31 08:06:47 compute-0 sudo[77727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-0 podman[77686]: 2026-01-31 08:06:47.606005342 +0000 UTC m=+0.034295271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:47 compute-0 podman[77686]: 2026-01-31 08:06:47.695712884 +0000 UTC m=+0.124002783 container init 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:06:47 compute-0 podman[77686]: 2026-01-31 08:06:47.699991983 +0000 UTC m=+0.128281892 container start 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:06:47 compute-0 podman[77686]: 2026-01-31 08:06:47.703332847 +0000 UTC m=+0.131622766 container attach 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:06:47 compute-0 ceph-mon[75227]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:47 compute-0 ceph-mon[75227]: Saving service crash spec with placement *
Jan 31 08:06:47 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/697546649' entity='client.admin' 
Jan 31 08:06:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:47 compute-0 sudo[77727]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:47 compute-0 sudo[77795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:47 compute-0 sudo[77795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:47 compute-0 sudo[77795]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:48 compute-0 sudo[77820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:06:48 compute-0 sudo[77820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:48 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 08:06:48 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:48 compute-0 ceph-mgr[75519]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 08:06:48 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 08:06:48 compute-0 modest_kapitsa[77740]: Added label _admin to host compute-0
Jan 31 08:06:48 compute-0 systemd[1]: libpod-83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77686]: 2026-01-31 08:06:48.149359388 +0000 UTC m=+0.577649287 container died 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a2d89cd2603e035f4ea53fe7907ebfe8e84c29ba894e9ae95f5b0df7d61a73-merged.mount: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77686]: 2026-01-31 08:06:48.175082248 +0000 UTC m=+0.603372147 container remove 83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247 (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:06:48 compute-0 systemd[1]: libpod-conmon-83a258ac46e9b9937dc46c029c57e9748bec3b2b877782e6b4584b6faece7247.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77858]: 2026-01-31 08:06:48.226659871 +0000 UTC m=+0.036195014 container create 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:06:48 compute-0 systemd[1]: Started libpod-conmon-80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd.scope.
Jan 31 08:06:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.300049408 +0000 UTC m=+0.040347642 container create 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:48 compute-0 podman[77858]: 2026-01-31 08:06:48.213822473 +0000 UTC m=+0.023357636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:48 compute-0 systemd[1]: Started libpod-conmon-960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068.scope.
Jan 31 08:06:48 compute-0 podman[77858]: 2026-01-31 08:06:48.323011567 +0000 UTC m=+0.132546730 container init 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:48 compute-0 podman[77858]: 2026-01-31 08:06:48.327922694 +0000 UTC m=+0.137457847 container start 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:06:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:48 compute-0 podman[77858]: 2026-01-31 08:06:48.330772704 +0000 UTC m=+0.140307847 container attach 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.342214125 +0000 UTC m=+0.082512409 container init 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.34886727 +0000 UTC m=+0.089165524 container start 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:48 compute-0 zealous_dirac[77907]: 167 167
Jan 31 08:06:48 compute-0 systemd[1]: libpod-960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.352295912 +0000 UTC m=+0.092594196 container attach 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.35293377 +0000 UTC m=+0.093232014 container died 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.28017413 +0000 UTC m=+0.020472404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b43f384dd6307197c6f163f8bb8591a3795e3f4d3a4246c430cdf7dff30b6528-merged.mount: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77888]: 2026-01-31 08:06:48.392714489 +0000 UTC m=+0.133012753 container remove 960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dirac, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:06:48 compute-0 systemd[1]: libpod-conmon-960857c02f6c3b4ad8c502383a9c5bac67e96b652b7a4cdfdd18cc57330dc068.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 31 08:06:48 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2926359208' entity='client.admin' 
Jan 31 08:06:48 compute-0 nervous_chandrasekhar[77896]: set mgr/dashboard/cluster/status
Jan 31 08:06:48 compute-0 systemd[1]: libpod-80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77945]: 2026-01-31 08:06:48.906575841 +0000 UTC m=+0.021558473 container died 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:48 compute-0 ceph-mon[75227]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:48 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2926359208' entity='client.admin' 
Jan 31 08:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-871ae5e1b3ebaef5285155ffd109de2c1c8bcd7fefbd80798632ae28ce3edc2b-merged.mount: Deactivated successfully.
Jan 31 08:06:48 compute-0 podman[77945]: 2026-01-31 08:06:48.951977682 +0000 UTC m=+0.066960244 container remove 80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd (image=quay.io/ceph/ceph:v20, name=nervous_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:48 compute-0 systemd[1]: libpod-conmon-80ad8ea2636a1ab8f4ad004de049eebc9eaf6299b6329f9acc2fc0de929e9bfd.scope: Deactivated successfully.
Jan 31 08:06:48 compute-0 systemd[1]: Reloading.
Jan 31 08:06:49 compute-0 systemd-rc-local-generator[77988]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:49 compute-0 systemd-sysv-generator[77992]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:49 compute-0 sudo[74186]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:49 compute-0 podman[78007]: 2026-01-31 08:06:49.401921039 +0000 UTC m=+0.039454841 container create 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:06:49 compute-0 systemd[1]: Started libpod-conmon-6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575.scope.
Jan 31 08:06:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 podman[78007]: 2026-01-31 08:06:49.385568821 +0000 UTC m=+0.023102623 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:49 compute-0 podman[78007]: 2026-01-31 08:06:49.496050263 +0000 UTC m=+0.133584035 container init 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:06:49 compute-0 podman[78007]: 2026-01-31 08:06:49.514063952 +0000 UTC m=+0.151597724 container start 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:49 compute-0 podman[78007]: 2026-01-31 08:06:49.517473422 +0000 UTC m=+0.155007264 container attach 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:06:49 compute-0 sudo[78051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shcpzcuihelpbvyfxyxvhtfferorhfej ; /usr/bin/python3'
Jan 31 08:06:49 compute-0 ceph-mgr[75519]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 08:06:49 compute-0 sudo[78051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:49 compute-0 python3[78053]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:49 compute-0 podman[78059]: 2026-01-31 08:06:49.855115912 +0000 UTC m=+0.053150657 container create a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:49 compute-0 systemd[1]: Started libpod-conmon-a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd.scope.
Jan 31 08:06:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:49 compute-0 ceph-mon[75227]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:06:49 compute-0 ceph-mon[75227]: Added label _admin to host compute-0
Jan 31 08:06:49 compute-0 podman[78059]: 2026-01-31 08:06:49.836985872 +0000 UTC m=+0.035020647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abb77c8256a36f62b2cf476108ce75c830a74b1767fd278edfcca953d66893f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abb77c8256a36f62b2cf476108ce75c830a74b1767fd278edfcca953d66893f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:49 compute-0 podman[78059]: 2026-01-31 08:06:49.944849426 +0000 UTC m=+0.142884191 container init a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:06:49 compute-0 podman[78059]: 2026-01-31 08:06:49.951314404 +0000 UTC m=+0.149349199 container start a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:06:49 compute-0 podman[78059]: 2026-01-31 08:06:49.95512113 +0000 UTC m=+0.153155885 container attach a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]: [
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:     {
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "available": false,
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "being_replaced": false,
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "ceph_device_lvm": false,
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "lsm_data": {},
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "lvs": [],
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "path": "/dev/sr0",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "rejected_reasons": [
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "Insufficient space (<5GB)",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "Has a FileSystem"
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         ],
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         "sys_api": {
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "actuators": null,
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "device_nodes": [
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:                 "sr0"
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             ],
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "devname": "sr0",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "human_readable_size": "482.00 KB",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "id_bus": "ata",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "model": "QEMU DVD-ROM",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "nr_requests": "2",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "parent": "/dev/sr0",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "partitions": {},
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "path": "/dev/sr0",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "removable": "1",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "rev": "2.5+",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "ro": "0",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "rotational": "1",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "sas_address": "",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "sas_device_handle": "",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "scheduler_mode": "mq-deadline",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "sectors": 0,
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "sectorsize": "2048",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "size": 493568.0,
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "support_discard": "2048",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "type": "disk",
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:             "vendor": "QEMU"
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:         }
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]:     }
Jan 31 08:06:49 compute-0 affectionate_lovelace[78023]: ]
Jan 31 08:06:50 compute-0 systemd[1]: libpod-6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575.scope: Deactivated successfully.
Jan 31 08:06:50 compute-0 podman[78007]: 2026-01-31 08:06:50.013231587 +0000 UTC m=+0.650765359 container died 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbf2070e1c3a9115de71501871475fcbf94351213298f3075287c2b0fe508128-merged.mount: Deactivated successfully.
Jan 31 08:06:50 compute-0 podman[78007]: 2026-01-31 08:06:50.125535905 +0000 UTC m=+0.763069667 container remove 6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lovelace, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:06:50 compute-0 systemd[1]: libpod-conmon-6a60d50febfe53aee68fe3d416716d7d85fa8daf44454ab52f54e0a7657d3575.scope: Deactivated successfully.
Jan 31 08:06:50 compute-0 sudo[77820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:50 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 08:06:50 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 08:06:50 compute-0 sudo[78721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 08:06:50 compute-0 sudo[78721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78721]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[78746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph
Jan 31 08:06:50 compute-0 sudo[78746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78746]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[78771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[78771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78771]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 31 08:06:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3928096470' entity='client.admin' 
Jan 31 08:06:50 compute-0 systemd[1]: libpod-a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd.scope: Deactivated successfully.
Jan 31 08:06:50 compute-0 podman[78059]: 2026-01-31 08:06:50.389738493 +0000 UTC m=+0.587773278 container died a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2abb77c8256a36f62b2cf476108ce75c830a74b1767fd278edfcca953d66893f-merged.mount: Deactivated successfully.
Jan 31 08:06:50 compute-0 sudo[78797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:50 compute-0 sudo[78797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78797]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 podman[78059]: 2026-01-31 08:06:50.432121259 +0000 UTC m=+0.630156014 container remove a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd (image=quay.io/ceph/ceph:v20, name=peaceful_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:50 compute-0 systemd[1]: libpod-conmon-a15bee5909f05cd68096dba8b8264920fe5b286435ff77571c41c06f5c80f5dd.scope: Deactivated successfully.
Jan 31 08:06:50 compute-0 sudo[78051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[78833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[78833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78833]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[78881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[78881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78881]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[78906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[78906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78906]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[78931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 31 08:06:50 compute-0 sudo[78931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78931]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 08:06:50 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 08:06:50 compute-0 sudo[78956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config
Jan 31 08:06:50 compute-0 sudo[78956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:50 compute-0 sudo[78981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config
Jan 31 08:06:50 compute-0 sudo[78981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[78981]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[79030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[79030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[79030]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[79083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:50 compute-0 sudo[79083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[79083]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[79131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[79131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:50 compute-0 sudo[79131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:50 compute-0 sudo[79179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf.new
Jan 31 08:06:50 compute-0 sudo[79179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf.new
Jan 31 08:06:51 compute-0 sudo[79204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79204]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf.new /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 08:06:51 compute-0 sudo[79252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79252]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 08:06:51 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 08:06:51 compute-0 sudo[79301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 31 08:06:51 compute-0 sudo[79349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snnzulcwwunebxkleauxpzkenhommxtv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846810.7880495-36661-99770731538985/async_wrapper.py j295523863277 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846810.7880495-36661-99770731538985/AnsiballZ_command.py _'
Jan 31 08:06:51 compute-0 sudo[79301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:51 compute-0 sudo[79301]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:51 compute-0 ceph-mon[75227]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 08:06:51 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3928096470' entity='client.admin' 
Jan 31 08:06:51 compute-0 ceph-mon[75227]: Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.conf
Jan 31 08:06:51 compute-0 sudo[79354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph
Jan 31 08:06:51 compute-0 sudo[79354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79354]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.client.admin.keyring.new
Jan 31 08:06:51 compute-0 sudo[79379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:51 compute-0 sudo[79404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79404]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 ansible-async_wrapper.py[79353]: Invoked with j295523863277 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846810.7880495-36661-99770731538985/AnsiballZ_command.py _
Jan 31 08:06:51 compute-0 ansible-async_wrapper.py[79444]: Starting module and watcher
Jan 31 08:06:51 compute-0 ansible-async_wrapper.py[79444]: Start watching 79449 (30)
Jan 31 08:06:51 compute-0 ansible-async_wrapper.py[79449]: Start module (79449)
Jan 31 08:06:51 compute-0 ansible-async_wrapper.py[79353]: Return async_wrapper task started.
Jan 31 08:06:51 compute-0 sudo[79429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.client.admin.keyring.new
Jan 31 08:06:51 compute-0 sudo[79429]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79349]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79429]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.client.admin.keyring.new
Jan 31 08:06:51 compute-0 sudo[79482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79482]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 python3[79452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:51 compute-0 sudo[79507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.client.admin.keyring.new
Jan 31 08:06:51 compute-0 sudo[79507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79507]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 podman[79508]: 2026-01-31 08:06:51.542399985 +0000 UTC m=+0.064906586 container create 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:06:51 compute-0 systemd[1]: Started libpod-conmon-6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996.scope.
Jan 31 08:06:51 compute-0 sudo[79546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 31 08:06:51 compute-0 sudo[79546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 podman[79508]: 2026-01-31 08:06:51.513515147 +0000 UTC m=+0.036021808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:51 compute-0 sudo[79546]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 08:06:51 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 08:06:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeefba90c81fc89f486c9258bb8221d546620da327ed286a3e6b5ac68b85ace/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceeefba90c81fc89f486c9258bb8221d546620da327ed286a3e6b5ac68b85ace/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:51 compute-0 ceph-mgr[75519]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 08:06:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:51 compute-0 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 08:06:51 compute-0 podman[79508]: 2026-01-31 08:06:51.655831006 +0000 UTC m=+0.178337687 container init 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:51 compute-0 podman[79508]: 2026-01-31 08:06:51.662378771 +0000 UTC m=+0.184885342 container start 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:06:51 compute-0 podman[79508]: 2026-01-31 08:06:51.667108082 +0000 UTC m=+0.189614693 container attach 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:51 compute-0 sudo[79576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config
Jan 31 08:06:51 compute-0 sudo[79576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79576]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config
Jan 31 08:06:51 compute-0 sudo[79602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79602]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring.new
Jan 31 08:06:51 compute-0 sudo[79627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79627]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:51 compute-0 sudo[79671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:51 compute-0 sudo[79696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring.new
Jan 31 08:06:51 compute-0 sudo[79696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:51 compute-0 sudo[79696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 sudo[79744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring.new
Jan 31 08:06:52 compute-0 sudo[79744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:52 compute-0 sudo[79744]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 sudo[79769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring.new
Jan 31 08:06:52 compute-0 sudo[79769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:52 compute-0 sudo[79769]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:06:52 compute-0 interesting_dewdney[79572]: 
Jan 31 08:06:52 compute-0 interesting_dewdney[79572]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 08:06:52 compute-0 sudo[79794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-82c880e6-d992-5408-8b12-efff9c275473/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring.new /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 08:06:52 compute-0 sudo[79794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:52 compute-0 systemd[1]: libpod-6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996.scope: Deactivated successfully.
Jan 31 08:06:52 compute-0 podman[79508]: 2026-01-31 08:06:52.121089436 +0000 UTC m=+0.643596037 container died 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:06:52 compute-0 sudo[79794]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceeefba90c81fc89f486c9258bb8221d546620da327ed286a3e6b5ac68b85ace-merged.mount: Deactivated successfully.
Jan 31 08:06:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:52 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 7dad7b7c-6b45-47d3-b703-f49deb4cfbb8 (Updating crash deployment (+1 -> 1))
Jan 31 08:06:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 31 08:06:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 08:06:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 08:06:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:52 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:52 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 08:06:52 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 08:06:52 compute-0 podman[79508]: 2026-01-31 08:06:52.176944788 +0000 UTC m=+0.699451349 container remove 6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996 (image=quay.io/ceph/ceph:v20, name=interesting_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:52 compute-0 systemd[1]: libpod-conmon-6e25939a5c9c04ba0bb98c74f00feb55418f60b647c3d59e9df89a97c5807996.scope: Deactivated successfully.
Jan 31 08:06:52 compute-0 ceph-mon[75227]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 08:06:52 compute-0 ceph-mon[75227]: Updating compute-0:/var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/config/ceph.client.admin.keyring
Jan 31 08:06:52 compute-0 ceph-mon[75227]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:52 compute-0 ceph-mon[75227]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 08:06:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 31 08:06:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 08:06:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:52 compute-0 ansible-async_wrapper.py[79449]: Module complete (79449)
Jan 31 08:06:52 compute-0 sudo[79835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:52 compute-0 sudo[79835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:52 compute-0 sudo[79835]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 sudo[79860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:52 compute-0 sudo[79860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:52 compute-0 sudo[79949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdiesvwyzggrgyerqlzkcwdgbccerkw ; /usr/bin/python3'
Jan 31 08:06:52 compute-0 sudo[79949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.703913172 +0000 UTC m=+0.062982801 container create 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:06:52 compute-0 python3[79958]: ansible-ansible.legacy.async_status Invoked with jid=j295523863277.79353 mode=status _async_dir=/root/.ansible_async
Jan 31 08:06:52 compute-0 sudo[79949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:52 compute-0 systemd[1]: Started libpod-conmon-4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15.scope.
Jan 31 08:06:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:52 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.769872803 +0000 UTC m=+0.128942432 container init 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.676937658 +0000 UTC m=+0.036007357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.774285025 +0000 UTC m=+0.133354634 container start 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:52 compute-0 focused_beaver[79990]: 167 167
Jan 31 08:06:52 compute-0 systemd[1]: libpod-4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15.scope: Deactivated successfully.
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.778243995 +0000 UTC m=+0.137313654 container attach 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.778936448 +0000 UTC m=+0.138006067 container died 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7572c4b5d9ea0807cb21646d38c27bc5a59e8d5e32031f8f7a6b65de39a1e0-merged.mount: Deactivated successfully.
Jan 31 08:06:52 compute-0 podman[79974]: 2026-01-31 08:06:52.812549066 +0000 UTC m=+0.171618715 container remove 4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_beaver, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:52 compute-0 systemd[1]: libpod-conmon-4fba95ebb69ed8bc534919b4e3a6dad8419b0fcf1cddfd6278daf3eeeb14bc15.scope: Deactivated successfully.
Jan 31 08:06:52 compute-0 sudo[80053]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tanuahugpckmnuvoytvmlwtpjapxjasx ; /usr/bin/python3'
Jan 31 08:06:52 compute-0 sudo[80053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:52 compute-0 systemd[1]: Reloading.
Jan 31 08:06:52 compute-0 systemd-rc-local-generator[80083]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:52 compute-0 systemd-sysv-generator[80087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:53 compute-0 python3[80057]: ansible-ansible.legacy.async_status Invoked with jid=j295523863277.79353 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 08:06:53 compute-0 sudo[80053]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:53 compute-0 systemd[1]: Reloading.
Jan 31 08:06:53 compute-0 systemd-rc-local-generator[80121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:53 compute-0 systemd-sysv-generator[80124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:53 compute-0 ceph-mon[75227]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:06:53 compute-0 ceph-mon[75227]: Deploying daemon crash.compute-0 on compute-0
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:53 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:06:53 compute-0 sudo[80162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmuaopfsbrsykbwofnohhyficpybloy ; /usr/bin/python3'
Jan 31 08:06:53 compute-0 sudo[80162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:53 compute-0 python3[80169]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 08:06:53 compute-0 sudo[80162]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:53 compute-0 podman[80210]: 2026-01-31 08:06:53.554680237 +0000 UTC m=+0.051074427 container create a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf1bf90d92a2178ecd7034bb997dc02230f23c76ee0a19467566c3312cfa2ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:53 compute-0 podman[80210]: 2026-01-31 08:06:53.527505185 +0000 UTC m=+0.023899375 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:53 compute-0 podman[80210]: 2026-01-31 08:06:53.623668684 +0000 UTC m=+0.120062934 container init a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:53 compute-0 podman[80210]: 2026-01-31 08:06:53.633143316 +0000 UTC m=+0.129537506 container start a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:53 compute-0 bash[80210]: a94e6142bb25ebdfc8bc31b3aa58a4b332318e4966bc778bd3a102cba4f5260c
Jan 31 08:06:53 compute-0 systemd[1]: Started Ceph crash.compute-0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 08:06:53 compute-0 sudo[79860]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:53 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 7dad7b7c-6b45-47d3-b703-f49deb4cfbb8 (Updating crash deployment (+1 -> 1))
Jan 31 08:06:53 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 7dad7b7c-6b45-47d3-b703-f49deb4cfbb8 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:53 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 475dc65f-8be8-43cf-bedc-fc1250554d70 (Updating mgr deployment (+1 -> 2))
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:06:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:53 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:53 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.mdykbc on compute-0
Jan 31 08:06:53 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.mdykbc on compute-0
Jan 31 08:06:53 compute-0 sudo[80270]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfmdsmdgmteridegohpwnexnxvwufjwh ; /usr/bin/python3'
Jan 31 08:06:53 compute-0 sudo[80270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.806+0000 7f3d3cc71640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.806+0000 7f3d3cc71640 -1 AuthRegistry(0x7f3d38052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.808+0000 7f3d3cc71640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.808+0000 7f3d3cc71640 -1 AuthRegistry(0x7f3d3cc6ffe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.809+0000 7f3d36575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: 2026-01-31T08:06:53.809+0000 7f3d3cc71640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 08:06:53 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-crash-compute-0[80227]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 08:06:53 compute-0 sudo[80246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:53 compute-0 sudo[80246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:53 compute-0 sudo[80246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:53 compute-0 sudo[80295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:53 compute-0 sudo[80295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:53 compute-0 python3[80283]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:54.025683861 +0000 UTC m=+0.052079640 container create 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:54 compute-0 systemd[1]: Started libpod-conmon-97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb.scope.
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:53.999388428 +0000 UTC m=+0.025784267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:54.120719017 +0000 UTC m=+0.147114846 container init 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:54.127210938 +0000 UTC m=+0.153606687 container start 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:54.138548519 +0000 UTC m=+0.164944288 container attach 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.305628951 +0000 UTC m=+0.044884635 container create b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:06:54 compute-0 systemd[1]: Started libpod-conmon-b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a.scope.
Jan 31 08:06:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.285111094 +0000 UTC m=+0.024366808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.42442926 +0000 UTC m=+0.163685024 container init b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.430670677 +0000 UTC m=+0.169926361 container start b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:54 compute-0 suspicious_gates[80412]: 167 167
Jan 31 08:06:54 compute-0 systemd[1]: libpod-b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a.scope: Deactivated successfully.
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.437174619 +0000 UTC m=+0.176430423 container attach b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.437645642 +0000 UTC m=+0.176901366 container died b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-60595ffe9fcdfd1270c2f8622386927961d4908d0e8956c9f19317c32ffa617c-merged.mount: Deactivated successfully.
Jan 31 08:06:54 compute-0 podman[80387]: 2026-01-31 08:06:54.512647826 +0000 UTC m=+0.251903540 container remove b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gates, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:06:54 compute-0 systemd[1]: libpod-conmon-b5c7e7069c108e24b0e7ab21c31ae005adcfd0b55a9d615aec45e81f455b2d5a.scope: Deactivated successfully.
Jan 31 08:06:54 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:06:54 compute-0 stupefied_montalcini[80335]: 
Jan 31 08:06:54 compute-0 stupefied_montalcini[80335]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 08:06:54 compute-0 systemd[1]: libpod-97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb.scope: Deactivated successfully.
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:54.587317009 +0000 UTC m=+0.613712758 container died 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:06:54 compute-0 systemd[1]: Reloading.
Jan 31 08:06:54 compute-0 systemd-rc-local-generator[80467]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:54 compute-0 systemd-sysv-generator[80470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:54 compute-0 ceph-mon[75227]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mdykbc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:06:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:54 compute-0 ceph-mon[75227]: Deploying daemon mgr.compute-0.mdykbc on compute-0
Jan 31 08:06:54 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5c7516e05a96e39e2e6c93f96ff872a89bbb5a408969e478a475c351a237a2a-merged.mount: Deactivated successfully.
Jan 31 08:06:54 compute-0 podman[80320]: 2026-01-31 08:06:54.896534923 +0000 UTC m=+0.922930672 container remove 97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb (image=quay.io/ceph/ceph:v20, name=stupefied_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:06:54 compute-0 systemd[1]: libpod-conmon-97e390e7a879efbb8c3588e5381004c1164ce3e8c03ed292ad9b168d2ba79bbb.scope: Deactivated successfully.
Jan 31 08:06:54 compute-0 sudo[80270]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:54 compute-0 systemd[1]: Reloading.
Jan 31 08:06:55 compute-0 systemd-rc-local-generator[80510]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:06:55 compute-0 systemd-sysv-generator[80514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:06:55 compute-0 systemd[1]: Starting Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:06:55 compute-0 sudo[80544]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taykcvepkgnmkdlqlciumyxbonhedlew ; /usr/bin/python3'
Jan 31 08:06:55 compute-0 sudo[80544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:55 compute-0 python3[80549]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:55 compute-0 podman[80596]: 2026-01-31 08:06:55.431025723 +0000 UTC m=+0.045306113 container create 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:55 compute-0 podman[80595]: 2026-01-31 08:06:55.46143777 +0000 UTC m=+0.072159656 container create 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:06:55 compute-0 podman[80596]: 2026-01-31 08:06:55.40526943 +0000 UTC m=+0.019549850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:06:55 compute-0 podman[80595]: 2026-01-31 08:06:55.409178685 +0000 UTC m=+0.019900591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:55 compute-0 systemd[1]: Started libpod-conmon-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope.
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3/merged/var/lib/ceph/mgr/ceph-compute-0.mdykbc supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:55 compute-0 podman[80596]: 2026-01-31 08:06:55.530140271 +0000 UTC m=+0.144420671 container init 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:06:55 compute-0 podman[80596]: 2026-01-31 08:06:55.538867945 +0000 UTC m=+0.153148335 container start 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:55 compute-0 podman[80595]: 2026-01-31 08:06:55.542645499 +0000 UTC m=+0.153367365 container init 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:06:55 compute-0 podman[80595]: 2026-01-31 08:06:55.548057441 +0000 UTC m=+0.158779307 container start 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:06:55 compute-0 bash[80596]: 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e
Jan 31 08:06:55 compute-0 systemd[1]: Started Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:06:55 compute-0 podman[80595]: 2026-01-31 08:06:55.563636718 +0000 UTC m=+0.174358604 container attach 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:55 compute-0 ceph-mgr[80633]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:06:55 compute-0 ceph-mgr[80633]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 31 08:06:55 compute-0 ceph-mgr[80633]: pidfile_write: ignore empty --pid-file
Jan 31 08:06:55 compute-0 sudo[80295]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:55 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'alerts'
Jan 31 08:06:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 08:06:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 475dc65f-8be8-43cf-bedc-fc1250554d70 (Updating mgr deployment (+1 -> 2))
Jan 31 08:06:55 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 475dc65f-8be8-43cf-bedc-fc1250554d70 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 31 08:06:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 08:06:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 sudo[80674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:06:55 compute-0 sudo[80674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:55 compute-0 sudo[80674]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:55 compute-0 ceph-mon[75227]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:06:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:55 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'balancer'
Jan 31 08:06:55 compute-0 sudo[80699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:55 compute-0 sudo[80699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:55 compute-0 sudo[80699]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:55 compute-0 sudo[80724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:06:55 compute-0 sudo[80724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:55 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'cephadm'
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220492711' entity='client.admin' 
Jan 31 08:06:56 compute-0 systemd[1]: libpod-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope: Deactivated successfully.
Jan 31 08:06:56 compute-0 conmon[80628]: conmon 8e5be85540dd80bd7d6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope/container/memory.events
Jan 31 08:06:56 compute-0 podman[80595]: 2026-01-31 08:06:56.06551523 +0000 UTC m=+0.676237086 container died 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:06:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e11c5e3a827d52d8ab5d05f2e3b786621ce280a69b76951857a54d1e85fbd77-merged.mount: Deactivated successfully.
Jan 31 08:06:56 compute-0 podman[80595]: 2026-01-31 08:06:56.106225304 +0000 UTC m=+0.716947160 container remove 8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b (image=quay.io/ceph/ceph:v20, name=mystifying_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:56 compute-0 systemd[1]: libpod-conmon-8e5be85540dd80bd7d6d25a0c17a1d327199dfd4acc26de627f21f1d5259856b.scope: Deactivated successfully.
Jan 31 08:06:56 compute-0 sudo[80544]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:56 compute-0 sudo[80854]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzekmnuikbkmiyubrxtprnztjebsifaz ; /usr/bin/python3'
Jan 31 08:06:56 compute-0 podman[80811]: 2026-01-31 08:06:56.267116862 +0000 UTC m=+0.063140275 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:56 compute-0 sudo[80854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:56 compute-0 ansible-async_wrapper.py[79444]: Done in kid B.
Jan 31 08:06:56 compute-0 podman[80811]: 2026-01-31 08:06:56.360994773 +0000 UTC m=+0.157018156 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:56 compute-0 python3[80863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:56 compute-0 podman[80887]: 2026-01-31 08:06:56.457388064 +0000 UTC m=+0.031690525 container create 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:56 compute-0 systemd[1]: Started libpod-conmon-1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e.scope.
Jan 31 08:06:56 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'crash'
Jan 31 08:06:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:56 compute-0 podman[80887]: 2026-01-31 08:06:56.523924007 +0000 UTC m=+0.098226498 container init 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:06:56 compute-0 podman[80887]: 2026-01-31 08:06:56.539410796 +0000 UTC m=+0.113713277 container start 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:06:56 compute-0 podman[80887]: 2026-01-31 08:06:56.443027057 +0000 UTC m=+0.017329528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:56 compute-0 podman[80887]: 2026-01-31 08:06:56.548580821 +0000 UTC m=+0.122883292 container attach 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:06:56 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'dashboard'
Jan 31 08:06:56 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:56 compute-0 sudo[80724]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:56 compute-0 sudo[80990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:06:56 compute-0 sudo[80990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:56 compute-0 sudo[80990]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:56 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 08:06:56 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:56 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 08:06:56 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 08:06:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 31 08:06:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1768084644' entity='client.admin' 
Jan 31 08:06:56 compute-0 sudo[81015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:56 compute-0 sudo[81015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:56 compute-0 sudo[81015]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:56 compute-0 systemd[1]: libpod-1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e.scope: Deactivated successfully.
Jan 31 08:06:56 compute-0 podman[81042]: 2026-01-31 08:06:56.985335898 +0000 UTC m=+0.018536458 container died 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:06:56 compute-0 sudo[81043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:56 compute-0 sudo[81043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ad445215d5052f306bd777c14c6b362936ab9df47a028f0b2d552ac60cf5c96-merged.mount: Deactivated successfully.
Jan 31 08:06:57 compute-0 podman[81042]: 2026-01-31 08:06:57.012864822 +0000 UTC m=+0.046065352 container remove 1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e (image=quay.io/ceph/ceph:v20, name=vibrant_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:57 compute-0 systemd[1]: libpod-conmon-1bc5a5046c49505faa9b74a97062fb58c812c30141d92d12dbda44849c1cf70e.scope: Deactivated successfully.
Jan 31 08:06:57 compute-0 sudo[80854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:57 compute-0 ceph-mon[75227]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3220492711' entity='client.admin' 
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1768084644' entity='client.admin' 
Jan 31 08:06:57 compute-0 sudo[81106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzxfpevuziwninpulkonfmmzuizypboc ; /usr/bin/python3'
Jan 31 08:06:57 compute-0 sudo[81106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:57 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'devicehealth'
Jan 31 08:06:57 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.330417804 +0000 UTC m=+0.045946421 container create 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:57 compute-0 systemd[1]: Started libpod-conmon-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope.
Jan 31 08:06:57 compute-0 python3[81113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.313183386 +0000 UTC m=+0.028712033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.418512569 +0000 UTC m=+0.134041196 container init 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.423038241 +0000 UTC m=+0.138566858 container start 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:57 compute-0 fervent_dewdney[81139]: 167 167
Jan 31 08:06:57 compute-0 systemd[1]: libpod-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope: Deactivated successfully.
Jan 31 08:06:57 compute-0 conmon[81139]: conmon 34f39514043738930a81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope/container/memory.events
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.429498949 +0000 UTC m=+0.145027586 container attach 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.430699018 +0000 UTC m=+0.146227635 container died 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:06:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-53632e5d667fe8d1cf05b20e665c9aed781506251b4b3a5ec4fd625640e23d31-merged.mount: Deactivated successfully.
Jan 31 08:06:57 compute-0 podman[81123]: 2026-01-31 08:06:57.473673928 +0000 UTC m=+0.189202555 container remove 34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b (image=quay.io/ceph/ceph:v20, name=fervent_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:06:57 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc[80626]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 08:06:57 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc[80626]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 08:06:57 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc[80626]:   from numpy import show_config as show_numpy_config
Jan 31 08:06:57 compute-0 systemd[1]: libpod-conmon-34f39514043738930a81fc98fd6c46461905b9b8b7b2e05c1b970f181e7dc56b.scope: Deactivated successfully.
Jan 31 08:06:57 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'influx'
Jan 31 08:06:57 compute-0 podman[81142]: 2026-01-31 08:06:57.495727935 +0000 UTC m=+0.083493798 container create 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:06:57 compute-0 sudo[81043]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:57 compute-0 systemd[1]: Started libpod-conmon-187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f.scope.
Jan 31 08:06:57 compute-0 podman[81142]: 2026-01-31 08:06:57.449559344 +0000 UTC m=+0.037325277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:57 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'insights'
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:57 compute-0 podman[81142]: 2026-01-31 08:06:57.57612964 +0000 UTC m=+0.163895573 container init 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:57 compute-0 podman[81142]: 2026-01-31 08:06:57.580544122 +0000 UTC m=+0.168310005 container start 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:57 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.fqetdi (unknown last config time)...
Jan 31 08:06:57 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.fqetdi (unknown last config time)...
Jan 31 08:06:57 compute-0 podman[81142]: 2026-01-31 08:06:57.587880209 +0000 UTC m=+0.175646062 container attach 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqetdi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqetdi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:57 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.fqetdi on compute-0
Jan 31 08:06:57 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.fqetdi on compute-0
Jan 31 08:06:57 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'iostat'
Jan 31 08:06:57 compute-0 sudo[81174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:57 compute-0 sudo[81174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:57 compute-0 sudo[81174]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:57 compute-0 sudo[81199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:06:57 compute-0 sudo[81199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:57 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'k8sevents'
Jan 31 08:06:57 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 2 completed events
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 31 08:06:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:58.014739486 +0000 UTC m=+0.059313928 container create 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:58 compute-0 ceph-mon[75227]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 08:06:58 compute-0 ceph-mon[75227]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fqetdi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:58 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 31 08:06:58 compute-0 systemd[1]: Started libpod-conmon-7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3.scope.
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'localpool'
Jan 31 08:06:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:57.979851932 +0000 UTC m=+0.024426464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:58.085327958 +0000 UTC m=+0.129902450 container init 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:58.09468552 +0000 UTC m=+0.139259962 container start 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:06:58 compute-0 elastic_shannon[81275]: 167 167
Jan 31 08:06:58 compute-0 systemd[1]: libpod-7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3.scope: Deactivated successfully.
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:58.09919843 +0000 UTC m=+0.143772932 container attach 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:58.099973941 +0000 UTC m=+0.144548393 container died 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be31eb65e9b9bf899f577e2733f5ccba3b3f26bbc8c1563de5837919a22489d7-merged.mount: Deactivated successfully.
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 08:06:58 compute-0 podman[81258]: 2026-01-31 08:06:58.140667113 +0000 UTC m=+0.185241555 container remove 7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3 (image=quay.io/ceph/ceph:v20, name=elastic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:58 compute-0 systemd[1]: libpod-conmon-7ec378fa3d8d4b09b5e43e5e40135ddee00d752a9f98d554bf709ac96ac382b3.scope: Deactivated successfully.
Jan 31 08:06:58 compute-0 sudo[81199]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:58 compute-0 sudo[81291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:06:58 compute-0 sudo[81291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:58 compute-0 sudo[81291]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:06:58 compute-0 sudo[81316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:06:58 compute-0 sudo[81316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'mirroring'
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'nfs'
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'orchestrator'
Jan 31 08:06:58 compute-0 podman[81388]: 2026-01-31 08:06:58.748859738 +0000 UTC m=+0.048073915 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:06:58 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:06:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 08:06:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:06:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 08:06:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 08:06:58 compute-0 magical_taussig[81170]: set require_min_compat_client to mimic
Jan 31 08:06:58 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 08:06:58 compute-0 systemd[1]: libpod-187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f.scope: Deactivated successfully.
Jan 31 08:06:58 compute-0 podman[81142]: 2026-01-31 08:06:58.794639423 +0000 UTC m=+1.382405286 container died 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:06:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-454aaae0dd121e6f4a31cdc7265d928904b34ef519d7022552cedff8c7f90110-merged.mount: Deactivated successfully.
Jan 31 08:06:58 compute-0 podman[81142]: 2026-01-31 08:06:58.838650688 +0000 UTC m=+1.426416541 container remove 187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f (image=quay.io/ceph/ceph:v20, name=magical_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:58 compute-0 systemd[1]: libpod-conmon-187f25782afd21be32728f4cd0132c050e3067b56cbfaa4656d0a9d5b71aa80f.scope: Deactivated successfully.
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 08:06:58 compute-0 podman[81388]: 2026-01-31 08:06:58.855826 +0000 UTC m=+0.155040177 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:58 compute-0 sudo[81106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'osd_support'
Jan 31 08:06:58 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 08:06:59 compute-0 ceph-mon[75227]: Reconfiguring mgr.compute-0.fqetdi (unknown last config time)...
Jan 31 08:06:59 compute-0 ceph-mon[75227]: Reconfiguring daemon mgr.compute-0.fqetdi on compute-0
Jan 31 08:06:59 compute-0 ceph-mon[75227]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:59 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028584485' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 08:06:59 compute-0 ceph-mon[75227]: osdmap e3: 0 total, 0 up, 0 in
Jan 31 08:06:59 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'progress'
Jan 31 08:06:59 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'prometheus'
Jan 31 08:06:59 compute-0 sudo[81316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:06:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:06:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:06:59 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:06:59 compute-0 sudo[81538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtydbixcwnrqatnghduhluzxazfncalb ; /usr/bin/python3'
Jan 31 08:06:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:06:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:06:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:06:59 compute-0 sudo[81538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:06:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:06:59 compute-0 sudo[81541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:06:59 compute-0 sudo[81541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:06:59 compute-0 sudo[81541]: pam_unix(sudo:session): session closed for user root
Jan 31 08:06:59 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'rbd_support'
Jan 31 08:06:59 compute-0 python3[81540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:06:59 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'rgw'
Jan 31 08:06:59 compute-0 podman[81566]: 2026-01-31 08:06:59.529202365 +0000 UTC m=+0.058060523 container create 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:06:59 compute-0 systemd[1]: Started libpod-conmon-04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72.scope.
Jan 31 08:06:59 compute-0 podman[81566]: 2026-01-31 08:06:59.504547252 +0000 UTC m=+0.033405470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:06:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:06:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:06:59 compute-0 podman[81566]: 2026-01-31 08:06:59.670340906 +0000 UTC m=+0.199199134 container init 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:06:59 compute-0 podman[81566]: 2026-01-31 08:06:59.677770772 +0000 UTC m=+0.206628940 container start 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:06:59 compute-0 podman[81566]: 2026-01-31 08:06:59.708464815 +0000 UTC m=+0.237323043 container attach 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:06:59 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'rook'
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:07:00 compute-0 sudo[81605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:00 compute-0 sudo[81605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:00 compute-0 sudo[81605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'selftest'
Jan 31 08:07:00 compute-0 sudo[81630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 31 08:07:00 compute-0 sudo[81630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'smb'
Jan 31 08:07:00 compute-0 sudo[81630]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [cephadm INFO root] Added host compute-0
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'snap_schedule'
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 31 08:07:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 7708b38e-1751-4b1c-a8da-60377aa4f99e (Updating mgr deployment (-1 -> 1))
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.mdykbc from compute-0 -- ports [8765]
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.mdykbc from compute-0 -- ports [8765]
Jan 31 08:07:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:00 compute-0 stupefied_carver[81581]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 08:07:00 compute-0 stupefied_carver[81581]: Scheduled mon update...
Jan 31 08:07:00 compute-0 stupefied_carver[81581]: Scheduled mgr update...
Jan 31 08:07:00 compute-0 stupefied_carver[81581]: Scheduled osd.default_drive_group update...
Jan 31 08:07:00 compute-0 systemd[1]: libpod-04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72.scope: Deactivated successfully.
Jan 31 08:07:00 compute-0 podman[81566]: 2026-01-31 08:07:00.601240652 +0000 UTC m=+1.130098810 container died 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:00 compute-0 sudo[81674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:00 compute-0 sudo[81674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:00 compute-0 sudo[81674]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0011b8608a934ccd060c90f3b8d9ec84492ea415d5e2cfe69535c5bc03974764-merged.mount: Deactivated successfully.
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'stats'
Jan 31 08:07:00 compute-0 podman[81566]: 2026-01-31 08:07:00.64474227 +0000 UTC m=+1.173600428 container remove 04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72 (image=quay.io/ceph/ceph:v20, name=stupefied_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:07:00 compute-0 systemd[1]: libpod-conmon-04dc6a3a58355f778fc9d73a468ea602a1a9bc1408ce104d185c4a626fb08b72.scope: Deactivated successfully.
Jan 31 08:07:00 compute-0 sudo[81538]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:00 compute-0 sudo[81707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 82c880e6-d992-5408-8b12-efff9c275473 --name mgr.compute-0.mdykbc --force --tcp-ports 8765
Jan 31 08:07:00 compute-0 sudo[81707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'status'
Jan 31 08:07:00 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'telegraf'
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'telemetry'
Jan 31 08:07:00 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:07:00 compute-0 sudo[81776]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvylkvxiwvfgitaezhigxfspsutpphhc ; /usr/bin/python3'
Jan 31 08:07:00 compute-0 sudo[81776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:00 compute-0 ceph-mgr[80633]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 08:07:01 compute-0 python3[81779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:01 compute-0 podman[81804]: 2026-01-31 08:07:01.097548287 +0000 UTC m=+0.062948408 container died 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.11340088 +0000 UTC m=+0.039905582 container create cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 08:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d5fc9bb375f27ca41f548f0790d8dde407be6574ec409f03e8dfbe52d9d29f3-merged.mount: Deactivated successfully.
Jan 31 08:07:01 compute-0 podman[81804]: 2026-01-31 08:07:01.138738095 +0000 UTC m=+0.104138226 container remove 7e59e7a2f63e9fdb34be3dfc03ad0787e7e54a0cbc811197c4e5d1ea740f4f6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:01 compute-0 bash[81804]: ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-mdykbc
Jan 31 08:07:01 compute-0 systemd[1]: Started libpod-conmon-cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec.scope.
Jan 31 08:07:01 compute-0 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.mdykbc.service: Main process exited, code=exited, status=143/n/a
Jan 31 08:07:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.186672766 +0000 UTC m=+0.113177488 container init cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.093572086 +0000 UTC m=+0.020076878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.191226391 +0000 UTC m=+0.117731103 container start cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.194269697 +0000 UTC m=+0.120774429 container attach cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:01 compute-0 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.mdykbc.service: Failed with result 'exit-code'.
Jan 31 08:07:01 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.mdykbc for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:07:01 compute-0 systemd[1]: ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.mdykbc.service: Consumed 6.173s CPU time, 426.0M memory peak, read 0B from disk, written 188.5K to disk.
Jan 31 08:07:01 compute-0 systemd[1]: Reloading.
Jan 31 08:07:01 compute-0 systemd-sysv-generator[81929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:01 compute-0 systemd-rc-local-generator[81925]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: Added host compute-0
Jan 31 08:07:01 compute-0 ceph-mon[75227]: Saving service mon spec with placement compute-0
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: Saving service mgr spec with placement compute-0
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 08:07:01 compute-0 ceph-mon[75227]: Saving service osd.default_drive_group spec with placement compute-0
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mon[75227]: Removing daemon mgr.compute-0.mdykbc from compute-0 -- ports [8765]
Jan 31 08:07:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 sudo[81707]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:01 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.mdykbc
Jan 31 08:07:01 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.mdykbc
Jan 31 08:07:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"} v 0)
Jan 31 08:07:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"} : dispatch
Jan 31 08:07:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"}]': finished
Jan 31 08:07:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 08:07:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 7708b38e-1751-4b1c-a8da-60377aa4f99e (Updating mgr deployment (-1 -> 1))
Jan 31 08:07:01 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 7708b38e-1751-4b1c-a8da-60377aa4f99e (Updating mgr deployment (-1 -> 1)) in 1 seconds
Jan 31 08:07:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 31 08:07:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:01 compute-0 sudo[81944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:07:01 compute-0 sudo[81944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:01 compute-0 sudo[81944]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:01 compute-0 sudo[81969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:01 compute-0 sudo[81969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:01 compute-0 sudo[81969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 08:07:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292455884' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:07:01 compute-0 adoring_kapitsa[81849]: 
Jan 31 08:07:01 compute-0 adoring_kapitsa[81849]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":48,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-31T08:06:11:330734+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T08:06:11.333031+0000","services":{}},"progress_events":{}}
Jan 31 08:07:01 compute-0 systemd[1]: libpod-cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec.scope: Deactivated successfully.
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.735849882 +0000 UTC m=+0.662354624 container died cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d89022a7cb22199affb66a6b8e47d2605bb5c1d287aa60530a3de016622ade63-merged.mount: Deactivated successfully.
Jan 31 08:07:01 compute-0 sudo[81994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:07:01 compute-0 podman[81820]: 2026-01-31 08:07:01.777160571 +0000 UTC m=+0.703665273 container remove cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec (image=quay.io/ceph/ceph:v20, name=adoring_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:01 compute-0 sudo[81994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:01 compute-0 systemd[1]: libpod-conmon-cf09e313285ce87d2ca4dcb9a4c1a30db713d2902b1cf4d30797f64b38b8bfec.scope: Deactivated successfully.
Jan 31 08:07:01 compute-0 sudo[81776]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-0 podman[82077]: 2026-01-31 08:07:02.139459894 +0000 UTC m=+0.055629883 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:07:02 compute-0 podman[82077]: 2026-01-31 08:07:02.238717884 +0000 UTC m=+0.154887873 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:07:02 compute-0 sudo[81994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: Removing key for mgr.compute-0.mdykbc
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.mdykbc"}]': finished
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/292455884' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:02 compute-0 sudo[82176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:02 compute-0 sudo[82176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:02 compute-0 sudo[82176]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:02 compute-0 sudo[82201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:07:02 compute-0 sudo[82201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 3 completed events
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:02 compute-0 podman[82238]: 2026-01-31 08:07:02.888807701 +0000 UTC m=+0.030151494 container create 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:07:02 compute-0 systemd[1]: Started libpod-conmon-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope.
Jan 31 08:07:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:02 compute-0 podman[82238]: 2026-01-31 08:07:02.946067621 +0000 UTC m=+0.087411414 container init 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:07:02 compute-0 podman[82238]: 2026-01-31 08:07:02.950487103 +0000 UTC m=+0.091830886 container start 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:07:02 compute-0 strange_sutherland[82254]: 167 167
Jan 31 08:07:02 compute-0 systemd[1]: libpod-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope: Deactivated successfully.
Jan 31 08:07:02 compute-0 conmon[82254]: conmon 947a4fec9e17e6f25f03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope/container/memory.events
Jan 31 08:07:02 compute-0 podman[82238]: 2026-01-31 08:07:02.958675698 +0000 UTC m=+0.100019531 container attach 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:02 compute-0 podman[82238]: 2026-01-31 08:07:02.959003948 +0000 UTC m=+0.100347771 container died 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:07:02 compute-0 podman[82238]: 2026-01-31 08:07:02.875462987 +0000 UTC m=+0.016806800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6a74aeacc6144366873e0827827a2b51170899bee32ab2ad7f35a12215c64f8-merged.mount: Deactivated successfully.
Jan 31 08:07:03 compute-0 podman[82238]: 2026-01-31 08:07:03.009435666 +0000 UTC m=+0.150779459 container remove 947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_sutherland, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:07:03 compute-0 systemd[1]: libpod-conmon-947a4fec9e17e6f25f035872d20e54e87eccae7e356b0d11cd7379c215947c17.scope: Deactivated successfully.
Jan 31 08:07:03 compute-0 podman[82276]: 2026-01-31 08:07:03.12730711 +0000 UTC m=+0.036932322 container create 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:07:03 compute-0 systemd[1]: Started libpod-conmon-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope.
Jan 31 08:07:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:03 compute-0 podman[82276]: 2026-01-31 08:07:03.108572795 +0000 UTC m=+0.018198057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:03 compute-0 podman[82276]: 2026-01-31 08:07:03.22226922 +0000 UTC m=+0.131894462 container init 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:03 compute-0 podman[82276]: 2026-01-31 08:07:03.229165597 +0000 UTC m=+0.138790839 container start 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:07:03 compute-0 podman[82276]: 2026-01-31 08:07:03.232152379 +0000 UTC m=+0.141777601 container attach 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:07:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:03 compute-0 epic_aryabhata[82292]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:07:03 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:03 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:03 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 39c36249-2898-4a76-b317-8e4ca379866f
Jan 31 08:07:03 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"} v 0)
Jan 31 08:07:04 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"} : dispatch
Jan 31 08:07:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 08:07:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:04 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"}]': finished
Jan 31 08:07:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 08:07:04 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 08:07:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:04 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:04 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 08:07:04 compute-0 lvm[82386]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:04 compute-0 lvm[82386]: VG ceph_vg0 finished
Jan 31 08:07:04 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:04 compute-0 ceph-mon[75227]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:04 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"} : dispatch
Jan 31 08:07:04 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2971171863' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "39c36249-2898-4a76-b317-8e4ca379866f"}]': finished
Jan 31 08:07:04 compute-0 ceph-mon[75227]: osdmap e4: 1 total, 0 up, 1 in
Jan 31 08:07:04 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 08:07:04 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434991703' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]:  stderr: got monmap epoch 1
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: --> Creating keyring file for osd.0
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 08:07:04 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 39c36249-2898-4a76-b317-8e4ca379866f --setuser ceph --setgroup ceph
Jan 31 08:07:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]:  stderr: 2026-01-31T08:07:04.982+0000 7fbc6ac288c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]:  stderr: 2026-01-31T08:07:05.002+0000 7fbc6ac288c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 08:07:05 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 08:07:05 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:07:05 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3434991703' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 08:07:05 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new dacad4fa-56d8-4937-b2d8-306fb75187f3
Jan 31 08:07:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"} v 0)
Jan 31 08:07:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"} : dispatch
Jan 31 08:07:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 08:07:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"}]': finished
Jan 31 08:07:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 08:07:06 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 08:07:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:06 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:06 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:06 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:06 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:06 compute-0 lvm[83338]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:06 compute-0 lvm[83338]: VG ceph_vg1 finished
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:06 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 31 08:07:06 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:06 compute-0 ceph-mon[75227]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:06 compute-0 ceph-mon[75227]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 08:07:06 compute-0 ceph-mon[75227]: Cluster is now healthy
Jan 31 08:07:06 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"} : dispatch
Jan 31 08:07:06 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1372698918' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dacad4fa-56d8-4937-b2d8-306fb75187f3"}]': finished
Jan 31 08:07:06 compute-0 ceph-mon[75227]: osdmap e5: 2 total, 0 up, 2 in
Jan 31 08:07:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 08:07:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829006065' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]:  stderr: got monmap epoch 1
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]: --> Creating keyring file for osd.1
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid dacad4fa-56d8-4937-b2d8-306fb75187f3 --setuser ceph --setgroup ceph
Jan 31 08:07:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:07 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3829006065' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]:  stderr: 2026-01-31T08:07:07.185+0000 7f1834a1f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]:  stderr: 2026-01-31T08:07:07.207+0000 7f1834a1f8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 31 08:07:07 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new faa25865-e7b6-44f9-8188-08bf287b941b
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"} v 0)
Jan 31 08:07:08 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"} : dispatch
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:08 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"}]': finished
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 31 08:07:08 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:08 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:08 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:08 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:08 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:08 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:08 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:08 compute-0 lvm[84292]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:08 compute-0 lvm[84292]: VG ceph_vg2 finished
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:08 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 31 08:07:08 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:08 compute-0 ceph-mon[75227]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:08 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"} : dispatch
Jan 31 08:07:08 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2645612539' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "faa25865-e7b6-44f9-8188-08bf287b941b"}]': finished
Jan 31 08:07:08 compute-0 ceph-mon[75227]: osdmap e6: 3 total, 0 up, 3 in
Jan 31 08:07:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 31 08:07:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071841395' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 08:07:09 compute-0 epic_aryabhata[82292]:  stderr: got monmap epoch 1
Jan 31 08:07:09 compute-0 epic_aryabhata[82292]: --> Creating keyring file for osd.2
Jan 31 08:07:09 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 31 08:07:09 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 31 08:07:09 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid faa25865-e7b6-44f9-8188-08bf287b941b --setuser ceph --setgroup ceph
Jan 31 08:07:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:09 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2071841395' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]:  stderr: 2026-01-31T08:07:09.362+0000 7fefb56948c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]:  stderr: 2026-01-31T08:07:09.382+0000 7fefb56948c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 08:07:10 compute-0 epic_aryabhata[82292]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 31 08:07:10 compute-0 systemd[1]: libpod-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope: Deactivated successfully.
Jan 31 08:07:10 compute-0 systemd[1]: libpod-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope: Consumed 5.399s CPU time.
Jan 31 08:07:10 compute-0 podman[85216]: 2026-01-31 08:07:10.699271794 +0000 UTC m=+0.022217715 container died 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:10 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-234179e24d5f203181caa95cc325ddd80e620dfe4be8e9a899ed925249d961cf-merged.mount: Deactivated successfully.
Jan 31 08:07:10 compute-0 podman[85216]: 2026-01-31 08:07:10.852160666 +0000 UTC m=+0.175106577 container remove 9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:10 compute-0 systemd[1]: libpod-conmon-9c089e81a2198e1a1b0deca4ab5336e8dd99c8abbe8e370c9dee5d001ea1a32e.scope: Deactivated successfully.
Jan 31 08:07:10 compute-0 ceph-mon[75227]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:10 compute-0 sudo[82201]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:10 compute-0 sudo[85231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:10 compute-0 sudo[85231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:10 compute-0 sudo[85231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:10 compute-0 sudo[85256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:07:10 compute-0 sudo[85256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.235759831 +0000 UTC m=+0.030763919 container create f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:11 compute-0 systemd[1]: Started libpod-conmon-f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052.scope.
Jan 31 08:07:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.313320354 +0000 UTC m=+0.108324462 container init f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.318978545 +0000 UTC m=+0.113982633 container start f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.222785751 +0000 UTC m=+0.017789859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.321930869 +0000 UTC m=+0.116934957 container attach f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:07:11 compute-0 hungry_zhukovsky[85310]: 167 167
Jan 31 08:07:11 compute-0 systemd[1]: libpod-f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052.scope: Deactivated successfully.
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.323260447 +0000 UTC m=+0.118264535 container died f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c0bb43aeca8c4ce984e7a5b0ce83cc184130dab29992b8e3dcb7d4c32375fc8-merged.mount: Deactivated successfully.
Jan 31 08:07:11 compute-0 podman[85293]: 2026-01-31 08:07:11.35910489 +0000 UTC m=+0.154109008 container remove f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_zhukovsky, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:11 compute-0 systemd[1]: libpod-conmon-f30686083b1c7cf781b6f893136e346192a21cb78f95397a0a7cc4800e3c4052.scope: Deactivated successfully.
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.493157435 +0000 UTC m=+0.039078046 container create b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:07:11 compute-0 systemd[1]: Started libpod-conmon-b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae.scope.
Jan 31 08:07:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.472886546 +0000 UTC m=+0.018807137 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.645607594 +0000 UTC m=+0.191528175 container init b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.650167884 +0000 UTC m=+0.196088455 container start b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.652962074 +0000 UTC m=+0.198882645 container attach b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]: {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:     "0": [
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:         {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "devices": [
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "/dev/loop3"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             ],
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_name": "ceph_lv0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_size": "21470642176",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "name": "ceph_lv0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "tags": {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.crush_device_class": "",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.encrypted": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osd_id": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.type": "block",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.vdo": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.with_tpm": "0"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             },
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "type": "block",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "vg_name": "ceph_vg0"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:         }
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:     ],
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:     "1": [
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:         {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "devices": [
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "/dev/loop4"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             ],
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_name": "ceph_lv1",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_size": "21470642176",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "name": "ceph_lv1",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "tags": {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.crush_device_class": "",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.encrypted": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osd_id": "1",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.type": "block",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.vdo": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.with_tpm": "0"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             },
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "type": "block",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "vg_name": "ceph_vg1"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:         }
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:     ],
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:     "2": [
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:         {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "devices": [
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "/dev/loop5"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             ],
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_name": "ceph_lv2",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_size": "21470642176",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "name": "ceph_lv2",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "tags": {
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.crush_device_class": "",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.encrypted": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osd_id": "2",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.type": "block",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.vdo": "0",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:                 "ceph.with_tpm": "0"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             },
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "type": "block",
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:             "vg_name": "ceph_vg2"
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:         }
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]:     ]
Jan 31 08:07:11 compute-0 thirsty_lamport[85352]: }
Jan 31 08:07:11 compute-0 systemd[1]: libpod-b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae.scope: Deactivated successfully.
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.915437573 +0000 UTC m=+0.461358144 container died b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 08:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-79a3c24216f2fb6e352fc8dd4d492d87b70580d9001fddf922e3e9a08206e0da-merged.mount: Deactivated successfully.
Jan 31 08:07:11 compute-0 podman[85335]: 2026-01-31 08:07:11.961220159 +0000 UTC m=+0.507140740 container remove b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:07:11 compute-0 systemd[1]: libpod-conmon-b8dc2817787bdc99b6d1d850c6d6860fc6837022c5d094f91fa62a91d38eeeae.scope: Deactivated successfully.
Jan 31 08:07:12 compute-0 sudo[85256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 31 08:07:12 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 08:07:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:12 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:12 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 08:07:12 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 08:07:12 compute-0 sudo[85372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:12 compute-0 sudo[85372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:12 compute-0 sudo[85372]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:12 compute-0 sudo[85397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:07:12 compute-0 sudo[85397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.507394042 +0000 UTC m=+0.043550504 container create 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:12 compute-0 systemd[1]: Started libpod-conmon-0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22.scope.
Jan 31 08:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.582202256 +0000 UTC m=+0.118358728 container init 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.488689488 +0000 UTC m=+0.024845970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.587195629 +0000 UTC m=+0.123352071 container start 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:12 compute-0 admiring_ride[85480]: 167 167
Jan 31 08:07:12 compute-0 systemd[1]: libpod-0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22.scope: Deactivated successfully.
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.592404908 +0000 UTC m=+0.128561350 container attach 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.592746357 +0000 UTC m=+0.128902799 container died 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b2ef0344c615f3a1b61d93405ab647291ec13b22fc6c3cba569c0368bec3113-merged.mount: Deactivated successfully.
Jan 31 08:07:12 compute-0 podman[85463]: 2026-01-31 08:07:12.628519008 +0000 UTC m=+0.164675450 container remove 0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ride, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:12 compute-0 systemd[1]: libpod-conmon-0c9f3dbb84bd69fc47c53990337ff13f8d00f0c35af285f0175be076eeb9ca22.scope: Deactivated successfully.
Jan 31 08:07:12 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:12 compute-0 podman[85510]: 2026-01-31 08:07:12.845589671 +0000 UTC m=+0.044563592 container create 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:07:12 compute-0 systemd[1]: Started libpod-conmon-34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262.scope.
Jan 31 08:07:12 compute-0 ceph-mon[75227]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 31 08:07:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:12 compute-0 ceph-mon[75227]: Deploying daemon osd.0 on compute-0
Jan 31 08:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:12 compute-0 podman[85510]: 2026-01-31 08:07:12.917559984 +0000 UTC m=+0.116533905 container init 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:12 compute-0 podman[85510]: 2026-01-31 08:07:12.82344859 +0000 UTC m=+0.022422571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:12 compute-0 podman[85510]: 2026-01-31 08:07:12.924229145 +0000 UTC m=+0.123203076 container start 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:12 compute-0 podman[85510]: 2026-01-31 08:07:12.929573488 +0000 UTC m=+0.128547379 container attach 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:07:13 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test[85526]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 08:07:13 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test[85526]:                             [--no-systemd] [--no-tmpfs]
Jan 31 08:07:13 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test[85526]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 08:07:13 compute-0 systemd[1]: libpod-34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262.scope: Deactivated successfully.
Jan 31 08:07:13 compute-0 podman[85510]: 2026-01-31 08:07:13.145771196 +0000 UTC m=+0.344745097 container died 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cb46db8897d66cb2351c049bc55ba75c319d1142f3ec981d14f534b2b98ffdd-merged.mount: Deactivated successfully.
Jan 31 08:07:13 compute-0 podman[85510]: 2026-01-31 08:07:13.197005767 +0000 UTC m=+0.395979658 container remove 34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate-test, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:07:13 compute-0 systemd[1]: libpod-conmon-34ac979fcb6dab96c1d83c1ed27d9e02d18f5ef8979b5089162e782ca973c262.scope: Deactivated successfully.
Jan 31 08:07:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:13 compute-0 systemd[1]: Reloading.
Jan 31 08:07:13 compute-0 systemd-sysv-generator[85592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:13 compute-0 systemd-rc-local-generator[85589]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:13 compute-0 systemd[1]: Reloading.
Jan 31 08:07:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:13 compute-0 systemd-sysv-generator[85632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:13 compute-0 systemd-rc-local-generator[85624]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:13 compute-0 systemd[1]: Starting Ceph osd.0 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:07:14 compute-0 podman[85689]: 2026-01-31 08:07:14.083037636 +0000 UTC m=+0.040329872 container create fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:07:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:14 compute-0 podman[85689]: 2026-01-31 08:07:14.062936203 +0000 UTC m=+0.020228519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:14 compute-0 podman[85689]: 2026-01-31 08:07:14.160593009 +0000 UTC m=+0.117885325 container init fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:07:14 compute-0 podman[85689]: 2026-01-31 08:07:14.168277148 +0000 UTC m=+0.125569394 container start fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:07:14 compute-0 podman[85689]: 2026-01-31 08:07:14.171386007 +0000 UTC m=+0.128678253 container attach fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:14 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:14 compute-0 lvm[85787]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:14 compute-0 lvm[85787]: VG ceph_vg0 finished
Jan 31 08:07:14 compute-0 lvm[85790]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:14 compute-0 lvm[85790]: VG ceph_vg1 finished
Jan 31 08:07:14 compute-0 lvm[85792]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:14 compute-0 lvm[85792]: VG ceph_vg2 finished
Jan 31 08:07:14 compute-0 lvm[85793]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:14 compute-0 lvm[85793]: VG ceph_vg0 finished
Jan 31 08:07:14 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 08:07:14 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 bash[85689]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 08:07:14 compute-0 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 ceph-mon[75227]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:14 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:14 compute-0 bash[85689]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 08:07:15 compute-0 bash[85689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 08:07:15 compute-0 bash[85689]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 bash[85689]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 bash[85689]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 08:07:15 compute-0 bash[85689]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 08:07:15 compute-0 bash[85689]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 08:07:15 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate[85704]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 08:07:15 compute-0 bash[85689]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 08:07:15 compute-0 systemd[1]: libpod-fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f.scope: Deactivated successfully.
Jan 31 08:07:15 compute-0 podman[85689]: 2026-01-31 08:07:15.122508233 +0000 UTC m=+1.079800489 container died fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:07:15 compute-0 systemd[1]: libpod-fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f.scope: Consumed 1.220s CPU time.
Jan 31 08:07:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a71c9c5ba1802ecf0db1ee2bcd187a12c97b2081d5e6faf1d1ed7c3eb630ead4-merged.mount: Deactivated successfully.
Jan 31 08:07:15 compute-0 podman[85689]: 2026-01-31 08:07:15.176793782 +0000 UTC m=+1.134086028 container remove fde71eb53b1ac8b13bf2ad62a73145f415e501c24eedf59026dfc15b069ba30f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:15 compute-0 podman[85951]: 2026-01-31 08:07:15.412926219 +0000 UTC m=+0.063205304 container create a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12598a021437db1139d352a9b9df1155b3b9d52bb477cc3ade1b1cd5f93e84be/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:15 compute-0 podman[85951]: 2026-01-31 08:07:15.385315011 +0000 UTC m=+0.035594156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:15 compute-0 podman[85951]: 2026-01-31 08:07:15.496599737 +0000 UTC m=+0.146878842 container init a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 08:07:15 compute-0 podman[85951]: 2026-01-31 08:07:15.501784624 +0000 UTC m=+0.152063699 container start a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:15 compute-0 bash[85951]: a780c474029a22c61c8c54917ed1da42069bcc90e60bdfc02f6b7ee79505675e
Jan 31 08:07:15 compute-0 systemd[1]: Started Ceph osd.0 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:07:15 compute-0 ceph-osd[85971]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: pidfile_write: ignore empty --pid-file
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 sudo[85397]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 31 08:07:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 08:07:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:15 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 31 08:07:15 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 sudo[85987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:15 compute-0 sudo[85987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:15 compute-0 sudo[85987]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cc000 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 sudo[86016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:07:15 compute-0 ceph-osd[85971]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 08:07:15 compute-0 sudo[86016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: load: jerasure load: lrc 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 08:07:15 compute-0 ceph-osd[85971]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e015cdc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount shared_bdev_used = 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Git sha 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: DB SUMMARY
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: DB Session ID:  Z9SKTA50MPZ0LLKR730F
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                     Options.env: 0x561e0145dea0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                Options.info_log: 0x561e024b88a0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.write_buffer_manager: 0x561e014c2b40
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.row_cache: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                              Options.wal_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.wal_compression: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_background_jobs: 4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Compression algorithms supported:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kZSTD supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e01461a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e01461a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e01461a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835925371, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835926236, "job": 1, "event": "recovery_finished"}
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: freelist init
Jan 31 08:07:15 compute-0 ceph-osd[85971]: freelist _read_cfg
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs umount
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bdev(0x561e02263800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluefs mount shared_bdev_used = 27262976
Jan 31 08:07:15 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Git sha 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: DB SUMMARY
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: DB Session ID:  Z9SKTA50MPZ0LLKR730E
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                     Options.env: 0x561e0145dce0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                Options.info_log: 0x561e024b8960
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.write_buffer_manager: 0x561e014c2b40
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.row_cache: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                              Options.wal_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.wal_compression: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_background_jobs: 4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Compression algorithms supported:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kZSTD supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e014618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e01461a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e01461a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561e024b90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561e01461a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:15 compute-0 ceph-osd[85971]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835968973, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835978926, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa", "db_session_id": "Z9SKTA50MPZ0LLKR730E", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846835991315, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa", "db_session_id": "Z9SKTA50MPZ0LLKR730E", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846836001317, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c3ebcd3-0dce-476c-b7bf-b828bb6e67fa", "db_session_id": "Z9SKTA50MPZ0LLKR730E", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846836004409, "job": 1, "event": "recovery_finished"}
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561e024ba000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: DB pointer 0x561e02672000
Jan 31 08:07:16 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 08:07:16 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 08:07:16 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:07:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:07:16 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 08:07:16 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 08:07:16 compute-0 ceph-osd[85971]: _get_class not permitted to load lua
Jan 31 08:07:16 compute-0 ceph-osd[85971]: _get_class not permitted to load sdk
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 load_pgs
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 load_pgs opened 0 pgs
Jan 31 08:07:16 compute-0 ceph-osd[85971]: osd.0 0 log_to_monitors true
Jan 31 08:07:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0[85967]: 2026-01-31T08:07:16.052+0000 7fbc8d3c78c0 -1 osd.0 0 log_to_monitors true
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.074628988 +0000 UTC m=+0.033577709 container create 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:07:16 compute-0 systemd[1]: Started libpod-conmon-549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df.scope.
Jan 31 08:07:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.059004863 +0000 UTC m=+0.017953594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.168772214 +0000 UTC m=+0.127720935 container init 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.176944638 +0000 UTC m=+0.135893359 container start 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.183382111 +0000 UTC m=+0.142330862 container attach 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:16 compute-0 inspiring_aryabhata[86527]: 167 167
Jan 31 08:07:16 compute-0 systemd[1]: libpod-549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df.scope: Deactivated successfully.
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.185751189 +0000 UTC m=+0.144699910 container died 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c45ed1e1ebf202f0ac94f7e949c4b1add2bf7021237bedb51eff2c1f69be5642-merged.mount: Deactivated successfully.
Jan 31 08:07:16 compute-0 podman[86477]: 2026-01-31 08:07:16.257193667 +0000 UTC m=+0.216142398 container remove 549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_aryabhata, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:07:16 compute-0 systemd[1]: libpod-conmon-549e3bfcf09886db027cd208005859efef79ab5484adef4b5ef932a937e830df.scope: Deactivated successfully.
Jan 31 08:07:16 compute-0 podman[86558]: 2026-01-31 08:07:16.440306001 +0000 UTC m=+0.033848216 container create a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:07:16 compute-0 systemd[1]: Started libpod-conmon-a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083.scope.
Jan 31 08:07:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:16 compute-0 podman[86558]: 2026-01-31 08:07:16.522306601 +0000 UTC m=+0.115848836 container init a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Jan 31 08:07:16 compute-0 podman[86558]: 2026-01-31 08:07:16.425823258 +0000 UTC m=+0.019365503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:16 compute-0 podman[86558]: 2026-01-31 08:07:16.530585157 +0000 UTC m=+0.124127412 container start a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:16 compute-0 podman[86558]: 2026-01-31 08:07:16.534487149 +0000 UTC m=+0.128029384 container attach a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:07:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 31 08:07:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:16 compute-0 ceph-mon[75227]: Deploying daemon osd.1 on compute-0
Jan 31 08:07:16 compute-0 ceph-mon[75227]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:16 compute-0 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:16 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:16 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:16 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test[86574]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 08:07:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test[86574]:                             [--no-systemd] [--no-tmpfs]
Jan 31 08:07:16 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test[86574]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 08:07:16 compute-0 systemd[1]: libpod-a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083.scope: Deactivated successfully.
Jan 31 08:07:16 compute-0 podman[86558]: 2026-01-31 08:07:16.69648124 +0000 UTC m=+0.290023555 container died a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:16 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:17 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 08:07:17 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 08:07:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-57cdf86ae9a764725d083dce5ce388c6a8c330a2e02278950669e7bc3c8f8704-merged.mount: Deactivated successfully.
Jan 31 08:07:17 compute-0 podman[86558]: 2026-01-31 08:07:17.217812395 +0000 UTC m=+0.811354610 container remove a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate-test, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:07:17 compute-0 systemd[1]: libpod-conmon-a7d26a5ed9b7885b58cfcc79c137c44cf6e34c533f30369267b8d04e5266a083.scope: Deactivated successfully.
Jan 31 08:07:17 compute-0 systemd[1]: Reloading.
Jan 31 08:07:17 compute-0 systemd-rc-local-generator[86636]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:17 compute-0 systemd-sysv-generator[86641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:17 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0 done with init, starting boot process
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0 start_boot
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 08:07:17 compute-0 ceph-osd[85971]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 08:07:17 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:17 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:17 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:17 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:17 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:17 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:17 compute-0 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 08:07:17 compute-0 ceph-mon[75227]: osdmap e7: 3 total, 0 up, 3 in
Jan 31 08:07:17 compute-0 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 08:07:17 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:17 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:17 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:17 compute-0 systemd[1]: Reloading.
Jan 31 08:07:17 compute-0 systemd-rc-local-generator[86674]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:17 compute-0 systemd-sysv-generator[86678]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:17 compute-0 systemd[1]: Starting Ceph osd.1 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:07:18 compute-0 podman[86734]: 2026-01-31 08:07:18.127993623 +0000 UTC m=+0.042412082 container create 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:07:18 compute-0 podman[86734]: 2026-01-31 08:07:18.102509875 +0000 UTC m=+0.016928344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:18 compute-0 podman[86734]: 2026-01-31 08:07:18.26423391 +0000 UTC m=+0.178652379 container init 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:18 compute-0 podman[86734]: 2026-01-31 08:07:18.271377503 +0000 UTC m=+0.185795942 container start 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:07:18 compute-0 podman[86734]: 2026-01-31 08:07:18.295568304 +0000 UTC m=+0.209986853 container attach 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:07:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:18 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:18 compute-0 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:18 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:18 compute-0 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:18 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:18 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:18 compute-0 ceph-mon[75227]: from='osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 08:07:18 compute-0 ceph-mon[75227]: osdmap e8: 3 total, 0 up, 3 in
Jan 31 08:07:18 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:18 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:18 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:18 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:18 compute-0 ceph-mon[75227]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:18 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:18 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:18 compute-0 lvm[86835]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:18 compute-0 lvm[86836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:18 compute-0 lvm[86835]: VG ceph_vg0 finished
Jan 31 08:07:18 compute-0 lvm[86836]: VG ceph_vg1 finished
Jan 31 08:07:18 compute-0 lvm[86838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:18 compute-0 lvm[86838]: VG ceph_vg2 finished
Jan 31 08:07:18 compute-0 lvm[86839]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:18 compute-0 lvm[86839]: VG ceph_vg2 finished
Jan 31 08:07:18 compute-0 lvm[86842]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:18 compute-0 lvm[86842]: VG ceph_vg2 finished
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 08:07:19 compute-0 bash[86734]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 08:07:19 compute-0 bash[86734]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 31 08:07:19 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate[86750]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 08:07:19 compute-0 bash[86734]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 31 08:07:19 compute-0 systemd[1]: libpod-1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d.scope: Deactivated successfully.
Jan 31 08:07:19 compute-0 systemd[1]: libpod-1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d.scope: Consumed 1.164s CPU time.
Jan 31 08:07:19 compute-0 podman[86955]: 2026-01-31 08:07:19.290900182 +0000 UTC m=+0.039361185 container died 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:07:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f1c1c84a5bb7d324c75444e0a247947d3bc9e3d5bd81dc4df3aa1588e04321f-merged.mount: Deactivated successfully.
Jan 31 08:07:19 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:19 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:19 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:19 compute-0 podman[86955]: 2026-01-31 08:07:19.876233122 +0000 UTC m=+0.624694105 container remove 1e7ca9a5b313fa386098909b966f7559a45b17455493788cdf9fefc7b6ab946d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:07:19 compute-0 ceph-mon[75227]: purged_snaps scrub starts
Jan 31 08:07:19 compute-0 ceph-mon[75227]: purged_snaps scrub ok
Jan 31 08:07:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:20 compute-0 podman[87016]: 2026-01-31 08:07:20.116416284 +0000 UTC m=+0.062261777 container create 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:20 compute-0 podman[87016]: 2026-01-31 08:07:20.077225386 +0000 UTC m=+0.023070949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4804df336c2564d02d5719e69487a3c0a1a5d71daf4aec625b7a59c0688b903/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:20 compute-0 podman[87016]: 2026-01-31 08:07:20.263695776 +0000 UTC m=+0.209541359 container init 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:07:20 compute-0 podman[87016]: 2026-01-31 08:07:20.269183643 +0000 UTC m=+0.215029126 container start 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:07:20 compute-0 bash[87016]: 679fb36577e7af7aa8574edb205985ee64a087dfb733a7a7c1809df4284c659a
Jan 31 08:07:20 compute-0 systemd[1]: Started Ceph osd.1 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:07:20 compute-0 ceph-osd[87035]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: pidfile_write: ignore empty --pid-file
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 sudo[86016]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 31 08:07:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 08:07:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:20 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 31 08:07:20 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744400 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780744000 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 sudo[87055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:20 compute-0 sudo[87055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:20 compute-0 sudo[87055]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:20 compute-0 ceph-osd[87035]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 31 08:07:20 compute-0 ceph-osd[87035]: load: jerasure load: lrc 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 sudo[87085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:07:20 compute-0 sudo[87085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-osd[87035]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 08:07:20 compute-0 ceph-osd[87035]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:20 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d780745c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount shared_bdev_used = 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Git sha 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: DB SUMMARY
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: DB Session ID:  5YQD9ZNBLM5EUMTKY353
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                     Options.env: 0x55d7805d5ea0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                Options.info_log: 0x55d7816268a0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.write_buffer_manager: 0x55d78063ab40
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.row_cache: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                              Options.wal_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.wal_compression: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_jobs: 4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Compression algorithms supported:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kZSTD supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:20 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fafa2376-40a7-4fa7-b459-89b99fa109d9
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840771470, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840772917, "job": 1, "event": "recovery_finished"}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: freelist init
Jan 31 08:07:20 compute-0 ceph-osd[87035]: freelist _read_cfg
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs umount
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) close
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bdev(0x55d7813db800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluefs mount shared_bdev_used = 27262976
Jan 31 08:07:20 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Git sha 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: DB SUMMARY
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: DB Session ID:  5YQD9ZNBLM5EUMTKY352
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                     Options.env: 0x55d7805d5ce0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                Options.info_log: 0x55d781626a20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.write_buffer_manager: 0x55d78063ab40
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.row_cache: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                              Options.wal_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.wal_compression: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_jobs: 4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Compression algorithms supported:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kZSTD supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d781626bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7816270c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7816270c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d7816270c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55d7805d9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fafa2376-40a7-4fa7-b459-89b99fa109d9
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840825236, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840890610, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fafa2376-40a7-4fa7-b459-89b99fa109d9", "db_session_id": "5YQD9ZNBLM5EUMTKY352", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840901983, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fafa2376-40a7-4fa7-b459-89b99fa109d9", "db_session_id": "5YQD9ZNBLM5EUMTKY352", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840933344, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846840, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fafa2376-40a7-4fa7-b459-89b99fa109d9", "db_session_id": "5YQD9ZNBLM5EUMTKY352", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846840965462, "job": 1, "event": "recovery_finished"}
Jan 31 08:07:20 compute-0 ceph-osd[87035]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 08:07:20 compute-0 ceph-mon[75227]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 31 08:07:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:20 compute-0 podman[87542]: 2026-01-31 08:07:20.977981535 +0000 UTC m=+0.071153110 container create 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:07:21 compute-0 podman[87542]: 2026-01-31 08:07:20.927885407 +0000 UTC m=+0.021056992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:21 compute-0 systemd[1]: Started libpod-conmon-7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0.scope.
Jan 31 08:07:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d78180a000
Jan 31 08:07:21 compute-0 ceph-osd[87035]: rocksdb: DB pointer 0x55d7817e0000
Jan 31 08:07:21 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 08:07:21 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 31 08:07:21 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 31 08:07:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:07:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.3 total, 0.3 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:07:21 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 08:07:21 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 08:07:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:21 compute-0 ceph-osd[87035]: _get_class not permitted to load lua
Jan 31 08:07:21 compute-0 ceph-osd[87035]: _get_class not permitted to load sdk
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 load_pgs
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 load_pgs opened 0 pgs
Jan 31 08:07:21 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1[87031]: 2026-01-31T08:07:21.090+0000 7f9bcc7778c0 -1 osd.1 0 log_to_monitors true
Jan 31 08:07:21 compute-0 ceph-osd[87035]: osd.1 0 log_to_monitors true
Jan 31 08:07:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 31 08:07:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 08:07:21 compute-0 podman[87542]: 2026-01-31 08:07:21.549642765 +0000 UTC m=+0.642814420 container init 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:21 compute-0 podman[87542]: 2026-01-31 08:07:21.559147276 +0000 UTC m=+0.652318891 container start 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:07:21 compute-0 gifted_borg[87558]: 167 167
Jan 31 08:07:21 compute-0 systemd[1]: libpod-7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0.scope: Deactivated successfully.
Jan 31 08:07:21 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:21 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:21 compute-0 podman[87542]: 2026-01-31 08:07:21.646827388 +0000 UTC m=+0.739999053 container attach 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:21 compute-0 podman[87542]: 2026-01-31 08:07:21.647821386 +0000 UTC m=+0.740993001 container died 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:07:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-97f428055667257181deafffaac6fe9300da634419039147cbdfad71092458a9-merged.mount: Deactivated successfully.
Jan 31 08:07:21 compute-0 podman[87542]: 2026-01-31 08:07:21.920468335 +0000 UTC m=+1.013639950 container remove 7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:21 compute-0 systemd[1]: libpod-conmon-7a7b0e38be495495becc5267420a8899eeeabfff2551d109fc74e7f43b2774d0.scope: Deactivated successfully.
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:22 compute-0 ceph-mon[75227]: Deploying daemon osd.2 on compute-0
Jan 31 08:07:22 compute-0 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 31 08:07:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:22 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:22 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:22 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:22 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 08:07:22 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.242291678 +0000 UTC m=+0.081182648 container create 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.198636972 +0000 UTC m=+0.037528002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:22 compute-0 systemd[1]: Started libpod-conmon-255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06.scope.
Jan 31 08:07:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.372198404 +0000 UTC m=+0.211089444 container init 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.38467868 +0000 UTC m=+0.223569630 container start 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.394530161 +0000 UTC m=+0.233421131 container attach 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:22 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test[87637]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 31 08:07:22 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test[87637]:                             [--no-systemd] [--no-tmpfs]
Jan 31 08:07:22 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test[87637]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 08:07:22 compute-0 systemd[1]: libpod-255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06.scope: Deactivated successfully.
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.58864921 +0000 UTC m=+0.427540190 container died 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:07:22 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:22 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5046018575ee5eeba3af44495083d271208c5e527a8445cdea2af329a351009-merged.mount: Deactivated successfully.
Jan 31 08:07:22 compute-0 podman[87620]: 2026-01-31 08:07:22.744610589 +0000 UTC m=+0.583501529 container remove 255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:22 compute-0 ceph-mgr[75519]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 31 08:07:22 compute-0 systemd[1]: libpod-conmon-255c67b2a7478100f99b05b264ec534c2cc5faeaa79d7f160a8fb772a5600e06.scope: Deactivated successfully.
Jan 31 08:07:22 compute-0 systemd[1]: Reloading.
Jan 31 08:07:23 compute-0 ceph-mon[75227]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:23 compute-0 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 08:07:23 compute-0 ceph-mon[75227]: osdmap e9: 3 total, 0 up, 3 in
Jan 31 08:07:23 compute-0 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 08:07:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:23 compute-0 systemd-rc-local-generator[87692]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:23 compute-0 systemd-sysv-generator[87698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.419 iops: 7531.179 elapsed_sec: 0.398
Jan 31 08:07:23 compute-0 ceph-osd[85971]: log_channel(cluster) log [WRN] : OSD bench result of 7531.179157 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 0 waiting for initial osdmap
Jan 31 08:07:23 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0[85967]: 2026-01-31T08:07:23.083+0000 7fbc89349640 -1 osd.0 0 waiting for initial osdmap
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0 done with init, starting boot process
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0 start_boot
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 08:07:23 compute-0 ceph-osd[87035]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-0[85967]: 2026-01-31T08:07:23.140+0000 7fbc8414e640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 31 08:07:23 compute-0 ceph-osd[85971]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 31 08:07:23 compute-0 systemd[1]: Reloading.
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:23 compute-0 systemd-sysv-generator[87740]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:07:23 compute-0 systemd-rc-local-generator[87736]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:07:23 compute-0 systemd[1]: Starting Ceph osd.2 for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2898453618; not ready for session (expect reconnect)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 08:07:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:23 compute-0 podman[87800]: 2026-01-31 08:07:23.75439324 +0000 UTC m=+0.047192288 container create 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:23 compute-0 podman[87800]: 2026-01-31 08:07:23.728879012 +0000 UTC m=+0.021678050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:23 compute-0 podman[87800]: 2026-01-31 08:07:23.893682224 +0000 UTC m=+0.186481252 container init 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:23 compute-0 podman[87800]: 2026-01-31 08:07:23.901189228 +0000 UTC m=+0.193988236 container start 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:23 compute-0 podman[87800]: 2026-01-31 08:07:23.908788575 +0000 UTC m=+0.201587573 container attach 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:24 compute-0 ceph-osd[85971]: osd.0 9 tick checking mon for new map
Jan 31 08:07:24 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:24 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618] boot
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:24 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:24 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:24 compute-0 ceph-osd[85971]: osd.0 11 state: booting -> active
Jan 31 08:07:24 compute-0 ceph-mon[75227]: OSD bench result of 7531.179157 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 08:07:24 compute-0 ceph-mon[75227]: from='osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 08:07:24 compute-0 ceph-mon[75227]: osdmap e10: 3 total, 0 up, 3 in
Jan 31 08:07:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:24 compute-0 ceph-mon[75227]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 08:07:24 compute-0 lvm[87901]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:24 compute-0 lvm[87899]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:24 compute-0 lvm[87901]: VG ceph_vg1 finished
Jan 31 08:07:24 compute-0 lvm[87899]: VG ceph_vg0 finished
Jan 31 08:07:24 compute-0 lvm[87903]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:24 compute-0 lvm[87903]: VG ceph_vg2 finished
Jan 31 08:07:24 compute-0 ceph-mgr[75519]: [devicehealth INFO root] creating mgr pool
Jan 31 08:07:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 31 08:07:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 bash[87800]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 31 08:07:24 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:24 compute-0 bash[87800]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:25 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:25 compute-0 bash[87800]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:25 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 08:07:25 compute-0 bash[87800]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 31 08:07:25 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 08:07:25 compute-0 bash[87800]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 31 08:07:25 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate[87815]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 08:07:25 compute-0 bash[87800]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 31 08:07:25 compute-0 systemd[1]: libpod-3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a.scope: Deactivated successfully.
Jan 31 08:07:25 compute-0 podman[87800]: 2026-01-31 08:07:25.05580407 +0000 UTC m=+1.348603078 container died 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:07:25 compute-0 systemd[1]: libpod-3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a.scope: Consumed 1.275s CPU time.
Jan 31 08:07:25 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:25 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 08:07:25 compute-0 ceph-mon[75227]: purged_snaps scrub starts
Jan 31 08:07:25 compute-0 ceph-mon[75227]: purged_snaps scrub ok
Jan 31 08:07:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:25 compute-0 ceph-mon[75227]: osd.0 [v2:192.168.122.100:6802/2898453618,v1:192.168.122.100:6803/2898453618] boot
Jan 31 08:07:25 compute-0 ceph-mon[75227]: osdmap e11: 3 total, 1 up, 3 in
Jan 31 08:07:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 31 08:07:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 31 08:07:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b6e9c0098ee06685e5b96a0d8e447a91e7f2194db77e36a47476b5ce6f7ffc-merged.mount: Deactivated successfully.
Jan 31 08:07:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 08:07:25 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:25 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:25 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 31 08:07:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 08:07:25 compute-0 ceph-osd[85971]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 08:07:25 compute-0 ceph-osd[85971]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 08:07:25 compute-0 ceph-osd[85971]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 08:07:25 compute-0 podman[87800]: 2026-01-31 08:07:25.623928869 +0000 UTC m=+1.916727867 container remove 3591d717fc98c9c34d0088f3915afe8138b94c9e884d6a4d192687ea63b2968a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2-activate, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:07:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 08:07:25 compute-0 podman[88077]: 2026-01-31 08:07:25.879550542 +0000 UTC m=+0.092574012 container create b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:25 compute-0 podman[88077]: 2026-01-31 08:07:25.81183444 +0000 UTC m=+0.024857890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38402a3e79b59356d9e7ff74ea5f6eb172b21f5b29e55cbc90dfc647a7e89fa/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:26 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:26 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:26 compute-0 podman[88077]: 2026-01-31 08:07:26.331428895 +0000 UTC m=+0.544452365 container init b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:26 compute-0 podman[88077]: 2026-01-31 08:07:26.340556745 +0000 UTC m=+0.553580175 container start b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: pidfile_write: ignore empty --pid-file
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 bash[88077]: b5c171002b43016761820d54d3db53db7b84c4e2383897c350be33b8e45afb5b
Jan 31 08:07:26 compute-0 systemd[1]: Started Ceph osd.2 for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c400 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4c000 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 31 08:07:26 compute-0 sudo[87085]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:26 compute-0 ceph-osd[88096]: load: jerasure load: lrc 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:26 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 08:07:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 08:07:26 compute-0 ceph-osd[88096]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a1f4dc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount shared_bdev_used = 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Git sha 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: DB SUMMARY
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: DB Session ID:  AMK3L2MLNV0PJCV1SAN1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                     Options.env: 0x5603a1dddea0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                Options.info_log: 0x5603a2e388a0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.write_buffer_manager: 0x5603a1e42b40
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.row_cache: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                              Options.wal_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.wal_compression: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_background_jobs: 4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Compression algorithms supported:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kZSTD supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de1a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de1a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de1a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e47fd480-0b39-49c2-8ccd-d36942261e3a
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846846746435, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846846747851, "job": 1, "event": "recovery_finished"}
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: freelist init
Jan 31 08:07:26 compute-0 ceph-osd[88096]: freelist _read_cfg
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs umount
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) close
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bdev(0x5603a2be3800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluefs mount shared_bdev_used = 27262976
Jan 31 08:07:26 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: RocksDB version: 7.9.2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Git sha 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: DB SUMMARY
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: DB Session ID:  AMK3L2MLNV0PJCV1SAN0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: CURRENT file:  CURRENT
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.error_if_exists: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.create_if_missing: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                     Options.env: 0x5603a1dddce0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                Options.info_log: 0x5603a2e38a20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                              Options.statistics: (nil)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.use_fsync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                              Options.db_log_dir: 
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.write_buffer_manager: 0x5603a1e42b40
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.unordered_write: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.row_cache: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                              Options.wal_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.two_write_queues: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.wal_compression: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.atomic_flush: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_background_jobs: 4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_background_compactions: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_subcompactions: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.max_open_files: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Compression algorithms supported:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kZSTD supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kXpressCompression supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kBZip2Compression supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kLZ4Compression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kZlibCompression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kLZ4HCCompression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         kSnappyCompression supported: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e38bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de18d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e390c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de1a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e390c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de1a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:           Options.merge_operator: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5603a2e390c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5603a1de1a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.compression: LZ4
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.num_levels: 7
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.bloom_locality: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                               Options.ttl: 2592000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                       Options.enable_blob_files: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                           Options.min_blob_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e47fd480-0b39-49c2-8ccd-d36942261e3a
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846846793688, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 08:07:26 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 08:07:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:26 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:26 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 08:07:26 compute-0 ceph-mon[75227]: osdmap e12: 3 total, 1 up, 3 in
Jan 31 08:07:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 31 08:07:26 compute-0 ceph-mon[75227]: pgmap v30: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 08:07:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847043226, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846846, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e47fd480-0b39-49c2-8ccd-d36942261e3a", "db_session_id": "AMK3L2MLNV0PJCV1SAN0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:27 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:27 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:27 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847114210, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e47fd480-0b39-49c2-8ccd-d36942261e3a", "db_session_id": "AMK3L2MLNV0PJCV1SAN0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847320988, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846847, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e47fd480-0b39-49c2-8ccd-d36942261e3a", "db_session_id": "AMK3L2MLNV0PJCV1SAN0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846847364872, "job": 1, "event": "recovery_finished"}
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 08:07:27 compute-0 sudo[88513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:27 compute-0 sudo[88513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:27 compute-0 sudo[88513]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:27 compute-0 sudo[88538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:07:27 compute-0 sudo[88538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5603a2e3a000
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: DB pointer 0x5603a2ff2000
Jan 31 08:07:27 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 08:07:27 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 31 08:07:27 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:07:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.8 total, 0.8 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:07:27 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 08:07:27 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 08:07:27 compute-0 ceph-osd[88096]: _get_class not permitted to load lua
Jan 31 08:07:27 compute-0 ceph-osd[88096]: _get_class not permitted to load sdk
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 load_pgs
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 load_pgs opened 0 pgs
Jan 31 08:07:27 compute-0 ceph-osd[88096]: osd.2 0 log_to_monitors true
Jan 31 08:07:27 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2[88092]: 2026-01-31T08:07:27.605+0000 7f0dcf1a48c0 -1 osd.2 0 log_to_monitors true
Jan 31 08:07:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 31 08:07:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 08:07:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v32: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 08:07:27 compute-0 podman[88608]: 2026-01-31 08:07:27.73008237 +0000 UTC m=+0.029746199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:27 compute-0 podman[88608]: 2026-01-31 08:07:27.860394189 +0000 UTC m=+0.160057978 container create a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:07:27 compute-0 systemd[1]: Started libpod-conmon-a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b.scope.
Jan 31 08:07:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 08:07:28 compute-0 ceph-mon[75227]: osdmap e13: 3 total, 1 up, 3 in
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:28 compute-0 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 31 08:07:28 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:28 compute-0 podman[88608]: 2026-01-31 08:07:28.10822918 +0000 UTC m=+0.407893009 container init a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:28 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:28 compute-0 podman[88608]: 2026-01-31 08:07:28.118243976 +0000 UTC m=+0.417907735 container start a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:28 compute-0 naughty_curie[88624]: 167 167
Jan 31 08:07:28 compute-0 systemd[1]: libpod-a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b.scope: Deactivated successfully.
Jan 31 08:07:28 compute-0 podman[88608]: 2026-01-31 08:07:28.147157519 +0000 UTC m=+0.446821318 container attach a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:28 compute-0 podman[88608]: 2026-01-31 08:07:28.151051121 +0000 UTC m=+0.450714920 container died a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-932da1693f15d17382c6daf177625d408002300c7d9fc8e9e6cb5ddedb368f69-merged.mount: Deactivated successfully.
Jan 31 08:07:28 compute-0 podman[88608]: 2026-01-31 08:07:28.288680497 +0000 UTC m=+0.588344246 container remove a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_curie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:07:28 compute-0 systemd[1]: libpod-conmon-a6b18cbc637361727bdfef2eee5ee49c7514d24c15cbef3c16dff0b2ce00535b.scope: Deactivated successfully.
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 08:07:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Jan 31 08:07:28 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 31 08:07:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:28 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:28 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:28 compute-0 podman[88650]: 2026-01-31 08:07:28.468703804 +0000 UTC m=+0.062868905 container create 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:28 compute-0 systemd[1]: Started libpod-conmon-959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561.scope.
Jan 31 08:07:28 compute-0 podman[88650]: 2026-01-31 08:07:28.443047652 +0000 UTC m=+0.037212793 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:28 compute-0 podman[88650]: 2026-01-31 08:07:28.58808035 +0000 UTC m=+0.182245451 container init 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:28 compute-0 podman[88650]: 2026-01-31 08:07:28.593997589 +0000 UTC m=+0.188162650 container start 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:07:28 compute-0 podman[88650]: 2026-01-31 08:07:28.604139088 +0000 UTC m=+0.198304139 container attach 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:28 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 08:07:28 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 42.254 iops: 10817.054 elapsed_sec: 0.277
Jan 31 08:07:28 compute-0 ceph-osd[87035]: log_channel(cluster) log [WRN] : OSD bench result of 10817.053791 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 0 waiting for initial osdmap
Jan 31 08:07:28 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1[87031]: 2026-01-31T08:07:28.930+0000 7f9bc86f9640 -1 osd.1 0 waiting for initial osdmap
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 set_numa_affinity not setting numa affinity
Jan 31 08:07:28 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-1[87031]: 2026-01-31T08:07:28.948+0000 7f9bc34fe640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 08:07:28 compute-0 ceph-osd[87035]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 31 08:07:29 compute-0 ceph-mon[75227]: pgmap v32: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 08:07:29 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:29 compute-0 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 08:07:29 compute-0 ceph-mon[75227]: osdmap e14: 3 total, 1 up, 3 in
Jan 31 08:07:29 compute-0 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 31 08:07:29 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:29 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:29 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1439559419; not ready for session (expect reconnect)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:29 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 08:07:29 compute-0 lvm[88744]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:29 compute-0 lvm[88744]: VG ceph_vg0 finished
Jan 31 08:07:29 compute-0 lvm[88746]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:29 compute-0 lvm[88746]: VG ceph_vg1 finished
Jan 31 08:07:29 compute-0 lvm[88748]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:29 compute-0 lvm[88748]: VG ceph_vg2 finished
Jan 31 08:07:29 compute-0 sweet_feistel[88667]: {}
Jan 31 08:07:29 compute-0 systemd[1]: libpod-959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561.scope: Deactivated successfully.
Jan 31 08:07:29 compute-0 podman[88650]: 2026-01-31 08:07:29.28592468 +0000 UTC m=+0.880089771 container died 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:07:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3995f1083b8c993ed30a5b8dee7f4fe53d951baccd1e0b853852d0bc684a1fbd-merged.mount: Deactivated successfully.
Jan 31 08:07:29 compute-0 podman[88650]: 2026-01-31 08:07:29.334514286 +0000 UTC m=+0.928679347 container remove 959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:07:29 compute-0 systemd[1]: libpod-conmon-959675a77e654a77e52439a114d49d8044eaf9040e0fd6bddc25c359601ce561.scope: Deactivated successfully.
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0 done with init, starting boot process
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0 start_boot
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 08:07:29 compute-0 ceph-osd[88096]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 31 08:07:29 compute-0 ceph-osd[87035]: osd.1 15 state: booting -> active
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419] boot
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:29 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[12,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:29 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:29 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:29 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:29 compute-0 sudo[88538]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:29 compute-0 sudo[88762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:07:29 compute-0 sudo[88762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:29 compute-0 sudo[88762]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:29 compute-0 sudo[88787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:29 compute-0 sudo[88787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:29 compute-0 sudo[88787]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:29 compute-0 sudo[88812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:07:29 compute-0 sudo[88812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 08:07:30 compute-0 ceph-mon[75227]: OSD bench result of 10817.053791 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 08:07:30 compute-0 ceph-mon[75227]: osd.1 [v2:192.168.122.100:6806/1439559419,v1:192.168.122.100:6807/1439559419] boot
Jan 31 08:07:30 compute-0 ceph-mon[75227]: osdmap e15: 3 total, 2 up, 3 in
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:30 compute-0 ceph-mon[75227]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Jan 31 08:07:30 compute-0 podman[88880]: 2026-01-31 08:07:30.083808385 +0000 UTC m=+0.105877602 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:30 compute-0 podman[88880]: 2026-01-31 08:07:30.214476283 +0000 UTC m=+0.236545490 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 08:07:30 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 08:07:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:30 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 31 08:07:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 31 08:07:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:30 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:30 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[12,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:30 compute-0 sudo[88812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:30 compute-0 sudo[89026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:30 compute-0 sudo[89026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:30 compute-0 sudo[89026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:30 compute-0 sudo[89051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:07:30 compute-0 sudo[89051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:31 compute-0 ceph-mon[75227]: purged_snaps scrub starts
Jan 31 08:07:31 compute-0 ceph-mon[75227]: purged_snaps scrub ok
Jan 31 08:07:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:31 compute-0 ceph-mon[75227]: osdmap e16: 3 total, 2 up, 3 in
Jan 31 08:07:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.274223508 +0000 UTC m=+0.075900696 container create ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.230569643 +0000 UTC m=+0.032246871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:31 compute-0 systemd[1]: Started libpod-conmon-ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08.scope.
Jan 31 08:07:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 08:07:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.403830436 +0000 UTC m=+0.205507604 container init ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 31 08:07:31 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 31 08:07:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.412805752 +0000 UTC m=+0.214482930 container start ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:31 compute-0 interesting_mcclintock[89105]: 167 167
Jan 31 08:07:31 compute-0 systemd[1]: libpod-ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08.scope: Deactivated successfully.
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.438696501 +0000 UTC m=+0.240373729 container attach ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.439129413 +0000 UTC m=+0.240806601 container died ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-72ecb5e8812ba81a5be7f05c338e6ace22800a6235dfbe799b9907a5eb3b18ba-merged.mount: Deactivated successfully.
Jan 31 08:07:31 compute-0 podman[89089]: 2026-01-31 08:07:31.580392234 +0000 UTC m=+0.382069422 container remove ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:07:31 compute-0 systemd[1]: libpod-conmon-ce22a00f1ee68b3f74afa315ceda7506af259cf32ce333a023b9f29bc3531e08.scope: Deactivated successfully.
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:07:31
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Some PGs (1.000000) are inactive; try again later
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 08:07:31 compute-0 podman[89132]: 2026-01-31 08:07:31.769755545 +0000 UTC m=+0.059098497 container create 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:07:31 compute-0 systemd[1]: Started libpod-conmon-765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f.scope.
Jan 31 08:07:31 compute-0 podman[89132]: 2026-01-31 08:07:31.739329287 +0000 UTC m=+0.028672219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 08:07:31 compute-0 podman[89132]: 2026-01-31 08:07:31.887442943 +0000 UTC m=+0.176785925 container init 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:07:31 compute-0 podman[89132]: 2026-01-31 08:07:31.894886025 +0000 UTC m=+0.184228977 container start 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:31 compute-0 ceph-mgr[75519]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 31 08:07:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 08:07:31 compute-0 podman[89132]: 2026-01-31 08:07:31.911351495 +0000 UTC m=+0.200694457 container attach 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:07:31 compute-0 sudo[89184]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 31 08:07:31 compute-0 sudo[89184]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 31 08:07:31 compute-0 sudo[89184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 31 08:07:31 compute-0 sudo[89188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgfzwwjoopuotmukisxdjfyrqvnarxev ; /usr/bin/python3'
Jan 31 08:07:31 compute-0 sudo[89188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:31 compute-0 sudo[89184]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 08:07:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 31 08:07:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: osdmap e17: 3 total, 2 up, 3 in
Jan 31 08:07:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: pgmap v38: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 08:07:32 compute-0 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 08:07:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 31 08:07:32 compute-0 python3[89192]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:32 compute-0 podman[89196]: 2026-01-31 08:07:32.174548935 +0000 UTC m=+0.039401406 container create b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:07:32 compute-0 systemd[1]: Started libpod-conmon-b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240.scope.
Jan 31 08:07:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:32 compute-0 podman[89196]: 2026-01-31 08:07:32.154553804 +0000 UTC m=+0.019406255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:32 compute-0 podman[89196]: 2026-01-31 08:07:32.273410495 +0000 UTC m=+0.138262976 container init b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:32 compute-0 podman[89196]: 2026-01-31 08:07:32.282418112 +0000 UTC m=+0.147270543 container start b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:07:32 compute-0 podman[89196]: 2026-01-31 08:07:32.292691745 +0000 UTC m=+0.157544176 container attach b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:32 compute-0 determined_rubin[89158]: [
Jan 31 08:07:32 compute-0 determined_rubin[89158]:     {
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "available": false,
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "being_replaced": false,
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "ceph_device_lvm": false,
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "lsm_data": {},
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "lvs": [],
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "path": "/dev/sr0",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "rejected_reasons": [
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "Has a FileSystem",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "Insufficient space (<5GB)"
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         ],
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         "sys_api": {
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "actuators": null,
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "device_nodes": [
Jan 31 08:07:32 compute-0 determined_rubin[89158]:                 "sr0"
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             ],
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "devname": "sr0",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "human_readable_size": "482.00 KB",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "id_bus": "ata",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "model": "QEMU DVD-ROM",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "nr_requests": "2",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "parent": "/dev/sr0",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "partitions": {},
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "path": "/dev/sr0",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "removable": "1",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "rev": "2.5+",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "ro": "0",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "rotational": "1",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "sas_address": "",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "sas_device_handle": "",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "scheduler_mode": "mq-deadline",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "sectors": 0,
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "sectorsize": "2048",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "size": 493568.0,
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "support_discard": "2048",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "type": "disk",
Jan 31 08:07:32 compute-0 determined_rubin[89158]:             "vendor": "QEMU"
Jan 31 08:07:32 compute-0 determined_rubin[89158]:         }
Jan 31 08:07:32 compute-0 determined_rubin[89158]:     }
Jan 31 08:07:32 compute-0 determined_rubin[89158]: ]
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 35.559 iops: 9103.206 elapsed_sec: 0.330
Jan 31 08:07:32 compute-0 ceph-osd[88096]: log_channel(cluster) log [WRN] : OSD bench result of 9103.205508 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 0 waiting for initial osdmap
Jan 31 08:07:32 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2[88092]: 2026-01-31T08:07:32.419+0000 7f0dcb126640 -1 osd.2 0 waiting for initial osdmap
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 check_osdmap_features require_osd_release unknown -> tentacle
Jan 31 08:07:32 compute-0 systemd[1]: libpod-765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f.scope: Deactivated successfully.
Jan 31 08:07:32 compute-0 podman[89132]: 2026-01-31 08:07:32.434830031 +0000 UTC m=+0.724172943 container died 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 08:07:32 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-osd-2[88092]: 2026-01-31T08:07:32.450+0000 7f0dc5f2b640 -1 osd.2 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 set_numa_affinity not setting numa affinity
Jan 31 08:07:32 compute-0 ceph-osd[88096]: osd.2 17 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 31 08:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-967b55a6bc1d4af2d3edf1b7b1aeaa0ae51edd4f907e5686eff4404d7dcdcdcb-merged.mount: Deactivated successfully.
Jan 31 08:07:32 compute-0 podman[89132]: 2026-01-31 08:07:32.471686462 +0000 UTC m=+0.761029374 container remove 765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:32 compute-0 systemd[1]: libpod-conmon-765eea69cf1104408dd73a54f584b6ae8b0882e3c1ccf485105c4776e9ea285f.scope: Deactivated successfully.
Jan 31 08:07:32 compute-0 sudo[89051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:32 compute-0 sudo[90026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:32 compute-0 sudo[90026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:32 compute-0 sudo[90026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:32 compute-0 sudo[90051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:07:32 compute-0 sudo[90051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 08:07:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395999579' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:07:32 compute-0 bold_williams[89214]: 
Jan 31 08:07:32 compute-0 bold_williams[89214]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":2,"osd_up_since":1769846849,"num_in_osds":3,"osd_in_since":1769846828,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":55218176,"bytes_avail":42886066176,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-31T08:06:11:330734+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T08:06:11.333031+0000","services":{}},"progress_events":{}}
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:07:32 compute-0 systemd[1]: libpod-b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240.scope: Deactivated successfully.
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:07:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:07:32 compute-0 podman[90086]: 2026-01-31 08:07:32.83191403 +0000 UTC m=+0.026403654 container died b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-134dab89c47b4e5b28279a04d7526d907862be535868c478ece908206a472c51-merged.mount: Deactivated successfully.
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.86802507 +0000 UTC m=+0.043795750 container create b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:32 compute-0 podman[90086]: 2026-01-31 08:07:32.873574549 +0000 UTC m=+0.068064153 container remove b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240 (image=quay.io/ceph/ceph:v20, name=bold_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:07:32 compute-0 systemd[1]: libpod-conmon-b5e25529015fc402ffc9b4fdc82e81bdab6d944d3e3cde03b2e14ae736703240.scope: Deactivated successfully.
Jan 31 08:07:32 compute-0 sudo[89188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:32 compute-0 systemd[1]: Started libpod-conmon-b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff.scope.
Jan 31 08:07:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.926459968 +0000 UTC m=+0.102230648 container init b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.929948587 +0000 UTC m=+0.105719267 container start b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:07:32 compute-0 affectionate_ardinghelli[90118]: 167 167
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.932596113 +0000 UTC m=+0.108366793 container attach b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:07:32 compute-0 systemd[1]: libpod-b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff.scope: Deactivated successfully.
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.933487338 +0000 UTC m=+0.109258018 container died b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.848570775 +0000 UTC m=+0.024341475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c0b44f7f4cf3c08279c154f04477e43da43900f233b798181cc4c43150acc9-merged.mount: Deactivated successfully.
Jan 31 08:07:32 compute-0 podman[90095]: 2026-01-31 08:07:32.967985562 +0000 UTC m=+0.143756242 container remove b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:07:32 compute-0 systemd[1]: libpod-conmon-b9bd81783d2237cdcc4bb25614bfe0589274db79c28a7da02f9d20cdc122d1ff.scope: Deactivated successfully.
Jan 31 08:07:33 compute-0 podman[90143]: 2026-01-31 08:07:33.113927496 +0000 UTC m=+0.057262065 container create 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: OSD bench result of 9103.205508 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: Adjusting osd_memory_target on compute-0 to 43685k
Jan 31 08:07:33 compute-0 ceph-mon[75227]: Unable to set osd_memory_target on compute-0 to 44733508: error parsing value: Value '44733508' is below minimum 939524096
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:33 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3395999579' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:07:33 compute-0 systemd[1]: Started libpod-conmon-9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f.scope.
Jan 31 08:07:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:33 compute-0 sudo[90179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jizscupexkmuvnkgjtgellolfxgrzone ; /usr/bin/python3'
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 sudo[90179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 podman[90143]: 2026-01-31 08:07:33.088011387 +0000 UTC m=+0.031346036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:33 compute-0 podman[90143]: 2026-01-31 08:07:33.193436725 +0000 UTC m=+0.136771334 container init 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:07:33 compute-0 podman[90143]: 2026-01-31 08:07:33.203403939 +0000 UTC m=+0.146738518 container start 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:07:33 compute-0 podman[90143]: 2026-01-31 08:07:33.208539396 +0000 UTC m=+0.151873985 container attach 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:33 compute-0 python3[90184]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:33 compute-0 ceph-mgr[75519]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3457121780; not ready for session (expect reconnect)
Jan 31 08:07:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:33 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:33 compute-0 ceph-mgr[75519]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 08:07:33 compute-0 podman[90191]: 2026-01-31 08:07:33.385245987 +0000 UTC m=+0.051542801 container create 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:07:33 compute-0 systemd[1]: Started libpod-conmon-7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4.scope.
Jan 31 08:07:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 08:07:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Jan 31 08:07:33 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780] boot
Jan 31 08:07:33 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Jan 31 08:07:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 31 08:07:33 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:33 compute-0 ceph-osd[88096]: osd.2 18 state: booting -> active
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f7c969dede8fb9aae8eac2dcf5b64fc3acf596f819914cdc5e8a50515359e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84f7c969dede8fb9aae8eac2dcf5b64fc3acf596f819914cdc5e8a50515359e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:33 compute-0 podman[90191]: 2026-01-31 08:07:33.442801649 +0000 UTC m=+0.109098483 container init 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:33 compute-0 podman[90191]: 2026-01-31 08:07:33.44739444 +0000 UTC m=+0.113691254 container start 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:33 compute-0 podman[90191]: 2026-01-31 08:07:33.450025936 +0000 UTC m=+0.116322780 container attach 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:33 compute-0 podman[90191]: 2026-01-31 08:07:33.365426272 +0000 UTC m=+0.031723106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:33 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.fqetdi(active, since 62s)
Jan 31 08:07:33 compute-0 wonderful_saha[90180]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:07:33 compute-0 wonderful_saha[90180]: --> All data devices are unavailable
Jan 31 08:07:33 compute-0 systemd[1]: libpod-9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f.scope: Deactivated successfully.
Jan 31 08:07:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 08:07:33 compute-0 podman[90245]: 2026-01-31 08:07:33.659275086 +0000 UTC m=+0.036680368 container died 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-df42b9841b49f6a8d57fb48b63d61465530e93d8f818c83423a4fa5c60640370-merged.mount: Deactivated successfully.
Jan 31 08:07:33 compute-0 podman[90245]: 2026-01-31 08:07:33.711456504 +0000 UTC m=+0.088861786 container remove 9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_saha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:07:33 compute-0 systemd[1]: libpod-conmon-9665ea1f7464a9f99372c20967e06db74e187eac928bd1b848255f34fa54d03f.scope: Deactivated successfully.
Jan 31 08:07:33 compute-0 sudo[90051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:33 compute-0 sudo[90259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:33 compute-0 sudo[90259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:33 compute-0 sudo[90259]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 08:07:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:33 compute-0 sudo[90284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:07:33 compute-0 sudo[90284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.098290291 +0000 UTC m=+0.043060209 container create db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:07:34 compute-0 systemd[1]: Started libpod-conmon-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope.
Jan 31 08:07:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:34 compute-0 ceph-mon[75227]: osd.2 [v2:192.168.122.100:6810/3457121780,v1:192.168.122.100:6811/3457121780] boot
Jan 31 08:07:34 compute-0 ceph-mon[75227]: osdmap e18: 3 total, 3 up, 3 in
Jan 31 08:07:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 31 08:07:34 compute-0 ceph-mon[75227]: mgrmap e9: compute-0.fqetdi(active, since 62s)
Jan 31 08:07:34 compute-0 ceph-mon[75227]: pgmap v40: 1 pgs: 1 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 31 08:07:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.157572303 +0000 UTC m=+0.102342271 container init db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.161542586 +0000 UTC m=+0.106312514 container start db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.16484668 +0000 UTC m=+0.109616618 container attach db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:34 compute-0 wizardly_curran[90341]: 167 167
Jan 31 08:07:34 compute-0 systemd[1]: libpod-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope: Deactivated successfully.
Jan 31 08:07:34 compute-0 conmon[90341]: conmon db08e22ba650a538c61f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope/container/memory.events
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.166614091 +0000 UTC m=+0.111384009 container died db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.083852619 +0000 UTC m=+0.028622557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4df02e7f83ade91d3ffa9ef74d3d2f7840203b5b4399e8a6e36b6b81f131bc98-merged.mount: Deactivated successfully.
Jan 31 08:07:34 compute-0 podman[90324]: 2026-01-31 08:07:34.202160855 +0000 UTC m=+0.146930773 container remove db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_curran, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:07:34 compute-0 systemd[1]: libpod-conmon-db08e22ba650a538c61f6a0b40b016f2b5e4ae4de142128666ab489d6ac3d293.scope: Deactivated successfully.
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.328557991 +0000 UTC m=+0.043947325 container create 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:34 compute-0 systemd[1]: Started libpod-conmon-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope.
Jan 31 08:07:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.302360304 +0000 UTC m=+0.017749638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.430083388 +0000 UTC m=+0.145472742 container init 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.436899682 +0000 UTC m=+0.152288986 container start 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.440214437 +0000 UTC m=+0.155603761 container attach 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:07:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 08:07:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 31 08:07:34 compute-0 gallant_antonelli[90210]: pool 'vms' created
Jan 31 08:07:34 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 31 08:07:34 compute-0 podman[90191]: 2026-01-31 08:07:34.568805086 +0000 UTC m=+1.235101910 container died 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:34 compute-0 systemd[1]: libpod-7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4.scope: Deactivated successfully.
Jan 31 08:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-84f7c969dede8fb9aae8eac2dcf5b64fc3acf596f819914cdc5e8a50515359e5-merged.mount: Deactivated successfully.
Jan 31 08:07:34 compute-0 podman[90191]: 2026-01-31 08:07:34.605180443 +0000 UTC m=+1.271477267 container remove 7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4 (image=quay.io/ceph/ceph:v20, name=gallant_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:07:34 compute-0 systemd[1]: libpod-conmon-7e7e6466a6379f1fc2c0060bd84ea394aac8e962465498ba0c165c15dd8c57e4.scope: Deactivated successfully.
Jan 31 08:07:34 compute-0 sudo[90179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:34 compute-0 sudo[90425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwktbgspyocarmfmgxjhifjcreduuuxb ; /usr/bin/python3'
Jan 31 08:07:34 compute-0 sudo[90425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]: {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:     "0": [
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:         {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "devices": [
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "/dev/loop3"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             ],
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_name": "ceph_lv0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_size": "21470642176",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "name": "ceph_lv0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "tags": {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.crush_device_class": "",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.encrypted": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osd_id": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.type": "block",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.vdo": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.with_tpm": "0"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             },
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "type": "block",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "vg_name": "ceph_vg0"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:         }
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:     ],
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:     "1": [
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:         {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "devices": [
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "/dev/loop4"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             ],
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_name": "ceph_lv1",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_size": "21470642176",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "name": "ceph_lv1",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "tags": {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.crush_device_class": "",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.encrypted": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osd_id": "1",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.type": "block",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.vdo": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.with_tpm": "0"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             },
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "type": "block",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "vg_name": "ceph_vg1"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:         }
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:     ],
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:     "2": [
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:         {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "devices": [
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "/dev/loop5"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             ],
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_name": "ceph_lv2",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_size": "21470642176",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "name": "ceph_lv2",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "tags": {
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.crush_device_class": "",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.encrypted": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osd_id": "2",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.type": "block",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.vdo": "0",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:                 "ceph.with_tpm": "0"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             },
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "type": "block",
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:             "vg_name": "ceph_vg2"
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:         }
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]:     ]
Jan 31 08:07:34 compute-0 dreamy_shockley[90380]: }
Jan 31 08:07:34 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:34 compute-0 systemd[1]: libpod-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope: Deactivated successfully.
Jan 31 08:07:34 compute-0 conmon[90380]: conmon 73059d3a5841549c761a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope/container/memory.events
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.768888414 +0000 UTC m=+0.484277758 container died 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:07:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-78949b41a8ecd53606a51d2c95259adaeba3bba4064ff8c47a8817526a43ff84-merged.mount: Deactivated successfully.
Jan 31 08:07:34 compute-0 podman[90364]: 2026-01-31 08:07:34.816346668 +0000 UTC m=+0.531736002 container remove 73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 08:07:34 compute-0 systemd[1]: libpod-conmon-73059d3a5841549c761a24b36c403408bf0ec83bb0b25102a371bf0787364806.scope: Deactivated successfully.
Jan 31 08:07:34 compute-0 sudo[90284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:34 compute-0 sudo[90442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:34 compute-0 sudo[90442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:34 compute-0 sudo[90442]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:34 compute-0 python3[90427]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:34 compute-0 sudo[90467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:07:34 compute-0 sudo[90467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:34 compute-0 podman[90468]: 2026-01-31 08:07:34.963327292 +0000 UTC m=+0.044514781 container create 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:34 compute-0 systemd[1]: Started libpod-conmon-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope.
Jan 31 08:07:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2a86c27653a7f20fce58cb337f777283fc65922cffce57fabba5bd8d3476c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a2a86c27653a7f20fce58cb337f777283fc65922cffce57fabba5bd8d3476c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 podman[90468]: 2026-01-31 08:07:34.945604356 +0000 UTC m=+0.026791885 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:35 compute-0 podman[90468]: 2026-01-31 08:07:35.050998873 +0000 UTC m=+0.132186412 container init 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:35 compute-0 podman[90468]: 2026-01-31 08:07:35.055630895 +0000 UTC m=+0.136818394 container start 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:35 compute-0 podman[90468]: 2026-01-31 08:07:35.059966649 +0000 UTC m=+0.141154208 container attach 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.192873501 +0000 UTC m=+0.039022584 container create f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:35 compute-0 systemd[1]: Started libpod-conmon-f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30.scope.
Jan 31 08:07:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.246606104 +0000 UTC m=+0.092755207 container init f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.250595968 +0000 UTC m=+0.096745051 container start f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:07:35 compute-0 elastic_kilby[90559]: 167 167
Jan 31 08:07:35 compute-0 systemd[1]: libpod-f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30.scope: Deactivated successfully.
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.253631754 +0000 UTC m=+0.099780827 container attach f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.253902422 +0000 UTC m=+0.100051505 container died f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.172745347 +0000 UTC m=+0.018894480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a6ba582449da7830ae0bc6b858444e19af027c3dbcfa05727e41de0a36c936e-merged.mount: Deactivated successfully.
Jan 31 08:07:35 compute-0 podman[90524]: 2026-01-31 08:07:35.281383576 +0000 UTC m=+0.127532659 container remove f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:35 compute-0 systemd[1]: libpod-conmon-f08f1b45bf9d46acc6fc5ecd0ab575d2a2ceed2a5133153ea74c9f1136bbdb30.scope: Deactivated successfully.
Jan 31 08:07:35 compute-0 podman[90583]: 2026-01-31 08:07:35.425397064 +0000 UTC m=+0.059290853 container create aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:35 compute-0 systemd[1]: Started libpod-conmon-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope.
Jan 31 08:07:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 08:07:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 podman[90583]: 2026-01-31 08:07:35.400847754 +0000 UTC m=+0.034741603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:35 compute-0 podman[90583]: 2026-01-31 08:07:35.507488736 +0000 UTC m=+0.141382575 container init aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:35 compute-0 podman[90583]: 2026-01-31 08:07:35.51639528 +0000 UTC m=+0.150289079 container start aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:35 compute-0 podman[90583]: 2026-01-31 08:07:35.519731256 +0000 UTC m=+0.153625115 container attach aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:07:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 08:07:35 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/685575208' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:35 compute-0 ceph-mon[75227]: osdmap e19: 3 total, 3 up, 3 in
Jan 31 08:07:35 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 31 08:07:35 compute-0 romantic_pare[90508]: pool 'volumes' created
Jan 31 08:07:35 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 31 08:07:35 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:35 compute-0 systemd[1]: libpod-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope: Deactivated successfully.
Jan 31 08:07:35 compute-0 conmon[90508]: conmon 4b5231c3a29176903f2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope/container/memory.events
Jan 31 08:07:35 compute-0 podman[90468]: 2026-01-31 08:07:35.59490621 +0000 UTC m=+0.676093759 container died 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 08:07:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-01a2a86c27653a7f20fce58cb337f777283fc65922cffce57fabba5bd8d3476c-merged.mount: Deactivated successfully.
Jan 31 08:07:35 compute-0 podman[90468]: 2026-01-31 08:07:35.63240268 +0000 UTC m=+0.713590169 container remove 4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83 (image=quay.io/ceph/ceph:v20, name=romantic_pare, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:35 compute-0 systemd[1]: libpod-conmon-4b5231c3a29176903f2c9198acb2bfe7251612e0fb86a776be977f1b76281e83.scope: Deactivated successfully.
Jan 31 08:07:35 compute-0 sudo[90425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v43: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:35 compute-0 sudo[90655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpahjygumvkxrpvtfczzsrphvtzjutxs ; /usr/bin/python3'
Jan 31 08:07:35 compute-0 sudo[90655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:35 compute-0 python3[90657]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:35 compute-0 podman[90698]: 2026-01-31 08:07:35.996769086 +0000 UTC m=+0.038856599 container create a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:07:36 compute-0 systemd[1]: Started libpod-conmon-a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27.scope.
Jan 31 08:07:36 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d38ae5622bffecfa5717ccf21c11fdac9a2243cc5c14ddfbaec4ce45ed98a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d38ae5622bffecfa5717ccf21c11fdac9a2243cc5c14ddfbaec4ce45ed98a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:36 compute-0 podman[90698]: 2026-01-31 08:07:36.051320762 +0000 UTC m=+0.093408315 container init a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:07:36 compute-0 podman[90698]: 2026-01-31 08:07:36.055912274 +0000 UTC m=+0.097999767 container start a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:07:36 compute-0 podman[90698]: 2026-01-31 08:07:36.059026782 +0000 UTC m=+0.101114395 container attach a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:07:36 compute-0 podman[90698]: 2026-01-31 08:07:35.979496223 +0000 UTC m=+0.021583716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:36 compute-0 lvm[90738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:07:36 compute-0 lvm[90741]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:07:36 compute-0 lvm[90741]: VG ceph_vg1 finished
Jan 31 08:07:36 compute-0 lvm[90738]: VG ceph_vg0 finished
Jan 31 08:07:36 compute-0 lvm[90745]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:07:36 compute-0 lvm[90745]: VG ceph_vg2 finished
Jan 31 08:07:36 compute-0 amazing_volhard[90600]: {}
Jan 31 08:07:36 compute-0 systemd[1]: libpod-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope: Deactivated successfully.
Jan 31 08:07:36 compute-0 systemd[1]: libpod-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope: Consumed 1.048s CPU time.
Jan 31 08:07:36 compute-0 podman[90583]: 2026-01-31 08:07:36.273770509 +0000 UTC m=+0.907664288 container died aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:07:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1205cca189f073eedddcc3841cdb2501393abf265e40ab7f4e20c5e2a80ac1da-merged.mount: Deactivated successfully.
Jan 31 08:07:36 compute-0 podman[90583]: 2026-01-31 08:07:36.30781212 +0000 UTC m=+0.941705899 container remove aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:36 compute-0 systemd[1]: libpod-conmon-aae51acb3a8cf5dfddace07e2bc9950cbdcd7237f81a7a6eb7c73c25dad2e5cf.scope: Deactivated successfully.
Jan 31 08:07:36 compute-0 sudo[90467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:36 compute-0 sudo[90776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:07:36 compute-0 sudo[90776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:36 compute-0 sudo[90776]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 08:07:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 08:07:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 31 08:07:36 compute-0 sweet_villani[90728]: pool 'backups' created
Jan 31 08:07:36 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 31 08:07:36 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2921628157' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:36 compute-0 ceph-mon[75227]: osdmap e20: 3 total, 3 up, 3 in
Jan 31 08:07:36 compute-0 ceph-mon[75227]: pgmap v43: 3 pgs: 1 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:36 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:36 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:36 compute-0 systemd[1]: libpod-a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27.scope: Deactivated successfully.
Jan 31 08:07:36 compute-0 podman[90698]: 2026-01-31 08:07:36.592550944 +0000 UTC m=+0.634638477 container died a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-85d38ae5622bffecfa5717ccf21c11fdac9a2243cc5c14ddfbaec4ce45ed98a7-merged.mount: Deactivated successfully.
Jan 31 08:07:36 compute-0 podman[90698]: 2026-01-31 08:07:36.632505784 +0000 UTC m=+0.674593287 container remove a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27 (image=quay.io/ceph/ceph:v20, name=sweet_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:07:36 compute-0 systemd[1]: libpod-conmon-a93c421fd95b64f62e51a4b7014e6cac80630b2536a4c42214c92c19b936bb27.scope: Deactivated successfully.
Jan 31 08:07:36 compute-0 sudo[90655]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:36 compute-0 sudo[90841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bawpwymjeslsmkxlzomyobyjnrtsgitn ; /usr/bin/python3'
Jan 31 08:07:36 compute-0 sudo[90841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:36 compute-0 python3[90843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:36 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:36 compute-0 podman[90844]: 2026-01-31 08:07:36.984512528 +0000 UTC m=+0.056633767 container create be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:37 compute-0 systemd[1]: Started libpod-conmon-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope.
Jan 31 08:07:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:37 compute-0 podman[90844]: 2026-01-31 08:07:36.961552872 +0000 UTC m=+0.033674161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5303a2fd46dc0775300aa94f03d9257ce838f2f114e4f26ee81a06e589bfbf92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5303a2fd46dc0775300aa94f03d9257ce838f2f114e4f26ee81a06e589bfbf92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:37 compute-0 podman[90844]: 2026-01-31 08:07:37.071414257 +0000 UTC m=+0.143535566 container init be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:07:37 compute-0 podman[90844]: 2026-01-31 08:07:37.0802973 +0000 UTC m=+0.152418549 container start be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:37 compute-0 podman[90844]: 2026-01-31 08:07:37.083568584 +0000 UTC m=+0.155689833 container attach be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 08:07:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 08:07:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 31 08:07:37 compute-0 reverent_golick[90860]: pool 'images' created
Jan 31 08:07:37 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 31 08:07:37 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3668516579' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:37 compute-0 ceph-mon[75227]: osdmap e21: 3 total, 3 up, 3 in
Jan 31 08:07:37 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:37 compute-0 systemd[1]: libpod-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope: Deactivated successfully.
Jan 31 08:07:37 compute-0 conmon[90860]: conmon be0eb96bfc2e22737588 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope/container/memory.events
Jan 31 08:07:37 compute-0 podman[90844]: 2026-01-31 08:07:37.610195559 +0000 UTC m=+0.682316768 container died be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:07:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5303a2fd46dc0775300aa94f03d9257ce838f2f114e4f26ee81a06e589bfbf92-merged.mount: Deactivated successfully.
Jan 31 08:07:37 compute-0 podman[90844]: 2026-01-31 08:07:37.64774065 +0000 UTC m=+0.719861859 container remove be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4 (image=quay.io/ceph/ceph:v20, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:07:37 compute-0 systemd[1]: libpod-conmon-be0eb96bfc2e227375880800c8511382ce4cf3588b16de63c116424995a280f4.scope: Deactivated successfully.
Jan 31 08:07:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v46: 5 pgs: 1 active+clean, 4 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:37 compute-0 sudo[90841]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:37 compute-0 sudo[90922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrzxurieturzsddcasaswbhrktthzvd ; /usr/bin/python3'
Jan 31 08:07:37 compute-0 sudo[90922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:37 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:37 compute-0 python3[90924]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:38.022843152 +0000 UTC m=+0.044749627 container create b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:38 compute-0 systemd[1]: Started libpod-conmon-b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339.scope.
Jan 31 08:07:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29180b75fc61fb38f24daa57d92a9c871b099cf9e2c031aeaa3fba70cd78b1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef29180b75fc61fb38f24daa57d92a9c871b099cf9e2c031aeaa3fba70cd78b1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:37.999821626 +0000 UTC m=+0.021728161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:38.112786739 +0000 UTC m=+0.134693264 container init b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:38.117701899 +0000 UTC m=+0.139608374 container start b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:38.12791713 +0000 UTC m=+0.149823655 container attach b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:07:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 08:07:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 08:07:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 31 08:07:38 compute-0 kind_perlman[90940]: pool 'cephfs.cephfs.meta' created
Jan 31 08:07:38 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/745180937' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:38 compute-0 ceph-mon[75227]: osdmap e22: 3 total, 3 up, 3 in
Jan 31 08:07:38 compute-0 ceph-mon[75227]: pgmap v46: 5 pgs: 1 active+clean, 4 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:38 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:38 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [2] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:38 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 31 08:07:38 compute-0 systemd[1]: libpod-b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339.scope: Deactivated successfully.
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:38.635166393 +0000 UTC m=+0.657072838 container died b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef29180b75fc61fb38f24daa57d92a9c871b099cf9e2c031aeaa3fba70cd78b1-merged.mount: Deactivated successfully.
Jan 31 08:07:38 compute-0 podman[90925]: 2026-01-31 08:07:38.675210075 +0000 UTC m=+0.697116510 container remove b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339 (image=quay.io/ceph/ceph:v20, name=kind_perlman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:38 compute-0 systemd[1]: libpod-conmon-b2b630959f51b6c9ba04375ba5f97ab19e6cb4ee7fffc50f85eb5d5de3963339.scope: Deactivated successfully.
Jan 31 08:07:38 compute-0 sudo[90922]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:38 compute-0 sudo[91005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbrtvkmkcioyhxxoyhccroyvlxwfftr ; /usr/bin/python3'
Jan 31 08:07:38 compute-0 sudo[91005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:38 compute-0 python3[91007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:39 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:39.01294549 +0000 UTC m=+0.044787589 container create c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:07:39 compute-0 systemd[1]: Started libpod-conmon-c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e.scope.
Jan 31 08:07:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8b0b9891eaf0206cbbb892d50afdae6952bc975419a5238e1036e5597118e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d8b0b9891eaf0206cbbb892d50afdae6952bc975419a5238e1036e5597118e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:38.98982042 +0000 UTC m=+0.021662569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:39.090011049 +0000 UTC m=+0.121853128 container init c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:39.094887348 +0000 UTC m=+0.126729417 container start c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:39.098395538 +0000 UTC m=+0.130237627 container attach c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:07:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 31 08:07:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 08:07:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 31 08:07:39 compute-0 kind_khayyam[91023]: pool 'cephfs.cephfs.data' created
Jan 31 08:07:39 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 31 08:07:39 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:07:39 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/6573086' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:39 compute-0 ceph-mon[75227]: osdmap e23: 3 total, 3 up, 3 in
Jan 31 08:07:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 31 08:07:39 compute-0 systemd[1]: libpod-c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e.scope: Deactivated successfully.
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:39.642948905 +0000 UTC m=+0.674791004 container died c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:07:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v49: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d8b0b9891eaf0206cbbb892d50afdae6952bc975419a5238e1036e5597118e6-merged.mount: Deactivated successfully.
Jan 31 08:07:39 compute-0 podman[91008]: 2026-01-31 08:07:39.684475179 +0000 UTC m=+0.716317278 container remove c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e (image=quay.io/ceph/ceph:v20, name=kind_khayyam, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:39 compute-0 systemd[1]: libpod-conmon-c80e7a212fee4aa7d24f7a2b86fc019646acdc66f5880c440f8436614f98975e.scope: Deactivated successfully.
Jan 31 08:07:39 compute-0 sudo[91005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:39 compute-0 sudo[91085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wathdpxgaivqnogsroiyfmzasfhabpmv ; /usr/bin/python3'
Jan 31 08:07:39 compute-0 sudo[91085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:40 compute-0 python3[91087]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:40 compute-0 podman[91088]: 2026-01-31 08:07:40.077187204 +0000 UTC m=+0.055437103 container create 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:40 compute-0 systemd[1]: Started libpod-conmon-5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4.scope.
Jan 31 08:07:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9d0ddfa926f603e0e94d83f9729ff7bf40ae657d704c2f79e941760270a5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c9d0ddfa926f603e0e94d83f9729ff7bf40ae657d704c2f79e941760270a5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:40 compute-0 podman[91088]: 2026-01-31 08:07:40.149437235 +0000 UTC m=+0.127687224 container init 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:07:40 compute-0 podman[91088]: 2026-01-31 08:07:40.061201038 +0000 UTC m=+0.039451037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:40 compute-0 podman[91088]: 2026-01-31 08:07:40.158155824 +0000 UTC m=+0.136405723 container start 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:40 compute-0 podman[91088]: 2026-01-31 08:07:40.16116584 +0000 UTC m=+0.139415769 container attach 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:07:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 08:07:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 31 08:07:40 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 31 08:07:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:07:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 31 08:07:40 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 08:07:40 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/890048932' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 08:07:40 compute-0 ceph-mon[75227]: osdmap e24: 3 total, 3 up, 3 in
Jan 31 08:07:40 compute-0 ceph-mon[75227]: pgmap v49: 7 pgs: 1 creating+peering, 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:40 compute-0 ceph-mon[75227]: osdmap e25: 3 total, 3 up, 3 in
Jan 31 08:07:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 08:07:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 08:07:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 31 08:07:41 compute-0 recursing_heyrovsky[91104]: enabled application 'rbd' on pool 'vms'
Jan 31 08:07:41 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 31 08:07:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 31 08:07:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1893307948' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 08:07:41 compute-0 ceph-mon[75227]: osdmap e26: 3 total, 3 up, 3 in
Jan 31 08:07:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v52: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:41 compute-0 systemd[1]: libpod-5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4.scope: Deactivated successfully.
Jan 31 08:07:41 compute-0 podman[91088]: 2026-01-31 08:07:41.673456047 +0000 UTC m=+1.651706026 container died 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:07:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9c9d0ddfa926f603e0e94d83f9729ff7bf40ae657d704c2f79e941760270a5b-merged.mount: Deactivated successfully.
Jan 31 08:07:41 compute-0 podman[91088]: 2026-01-31 08:07:41.750881486 +0000 UTC m=+1.729131415 container remove 5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4 (image=quay.io/ceph/ceph:v20, name=recursing_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:41 compute-0 systemd[1]: libpod-conmon-5d0b699f042d43ccb3c3633c6cb5a2f97816fe34fe96e7147e0bb89d283d8ef4.scope: Deactivated successfully.
Jan 31 08:07:41 compute-0 sudo[91085]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:41 compute-0 sudo[91166]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhxmsmhmfctbrkoqwycdcmbwvunigujm ; /usr/bin/python3'
Jan 31 08:07:41 compute-0 sudo[91166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:42 compute-0 python3[91168]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:42 compute-0 podman[91169]: 2026-01-31 08:07:42.14502338 +0000 UTC m=+0.018803907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:42 compute-0 podman[91169]: 2026-01-31 08:07:42.375394442 +0000 UTC m=+0.249174949 container create 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:42 compute-0 systemd[1]: Started libpod-conmon-9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677.scope.
Jan 31 08:07:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f290810d7801b0a5260eebd810d89f58f89777ded1a90753d8d5e20e435ad3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f290810d7801b0a5260eebd810d89f58f89777ded1a90753d8d5e20e435ad3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:42 compute-0 podman[91169]: 2026-01-31 08:07:42.438546403 +0000 UTC m=+0.312326940 container init 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:07:42 compute-0 podman[91169]: 2026-01-31 08:07:42.444343599 +0000 UTC m=+0.318124116 container start 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:07:42 compute-0 podman[91169]: 2026-01-31 08:07:42.448064864 +0000 UTC m=+0.321845361 container attach 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:42 compute-0 ceph-mon[75227]: pgmap v52: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 31 08:07:42 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 08:07:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 08:07:43 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 31 08:07:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 08:07:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 31 08:07:43 compute-0 quirky_herschel[91184]: enabled application 'rbd' on pool 'volumes'
Jan 31 08:07:43 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 31 08:07:43 compute-0 systemd[1]: libpod-9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677.scope: Deactivated successfully.
Jan 31 08:07:43 compute-0 podman[91169]: 2026-01-31 08:07:43.736430528 +0000 UTC m=+1.610211085 container died 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:07:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f290810d7801b0a5260eebd810d89f58f89777ded1a90753d8d5e20e435ad3-merged.mount: Deactivated successfully.
Jan 31 08:07:43 compute-0 podman[91169]: 2026-01-31 08:07:43.78102884 +0000 UTC m=+1.654809387 container remove 9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677 (image=quay.io/ceph/ceph:v20, name=quirky_herschel, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:43 compute-0 systemd[1]: libpod-conmon-9ad1a7643f661d9ed5ccb0805d2df3c6e03b683671112b10f506e0011161e677.scope: Deactivated successfully.
Jan 31 08:07:43 compute-0 sudo[91166]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:43 compute-0 sudo[91243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnlvtqmvdbvcgvsddwbnypkexxigfhpl ; /usr/bin/python3'
Jan 31 08:07:43 compute-0 sudo[91243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:44 compute-0 python3[91245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.085961639 +0000 UTC m=+0.041371771 container create 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:44 compute-0 systemd[1]: Started libpod-conmon-80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9.scope.
Jan 31 08:07:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1d0d61216a4645ee71a94b0af8fdd8439c65973f5e3e19751b53cd72333940/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1d0d61216a4645ee71a94b0af8fdd8439c65973f5e3e19751b53cd72333940/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.139692822 +0000 UTC m=+0.095102954 container init 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.14454684 +0000 UTC m=+0.099956972 container start 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.147800673 +0000 UTC m=+0.103210805 container attach 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.063647692 +0000 UTC m=+0.019057854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 31 08:07:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 08:07:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 08:07:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 08:07:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 31 08:07:44 compute-0 beautiful_chaplygin[91261]: enabled application 'rbd' on pool 'backups'
Jan 31 08:07:44 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 31 08:07:44 compute-0 ceph-mon[75227]: pgmap v53: 7 pgs: 2 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:44 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/26065837' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 08:07:44 compute-0 ceph-mon[75227]: osdmap e27: 3 total, 3 up, 3 in
Jan 31 08:07:44 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 31 08:07:44 compute-0 systemd[1]: libpod-80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9.scope: Deactivated successfully.
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.740272676 +0000 UTC m=+0.695682808 container died 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed1d0d61216a4645ee71a94b0af8fdd8439c65973f5e3e19751b53cd72333940-merged.mount: Deactivated successfully.
Jan 31 08:07:44 compute-0 podman[91246]: 2026-01-31 08:07:44.779129324 +0000 UTC m=+0.734539456 container remove 80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:44 compute-0 systemd[1]: libpod-conmon-80d6d181cde1574fcb53b08ec7a3fe289a464e0794dc1a9245978215b85faff9.scope: Deactivated successfully.
Jan 31 08:07:44 compute-0 sudo[91243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:44 compute-0 sudo[91321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvtwozuyhjxzmhozjcvnchxfekxkyjit ; /usr/bin/python3'
Jan 31 08:07:44 compute-0 sudo[91321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:45 compute-0 python3[91323]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.087147201 +0000 UTC m=+0.053537758 container create 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:45 compute-0 systemd[1]: Started libpod-conmon-1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f.scope.
Jan 31 08:07:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b865e25c039a93fc8b12375fafe506e4e2fe16baacf7f01c374bde12108c39e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b865e25c039a93fc8b12375fafe506e4e2fe16baacf7f01c374bde12108c39e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.1565222 +0000 UTC m=+0.122912727 container init 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.062714724 +0000 UTC m=+0.029105271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.161123332 +0000 UTC m=+0.127513849 container start 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.16458336 +0000 UTC m=+0.130973907 container attach 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 31 08:07:45 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 08:07:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 08:07:45 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3360151616' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 08:07:45 compute-0 ceph-mon[75227]: osdmap e28: 3 total, 3 up, 3 in
Jan 31 08:07:45 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 31 08:07:45 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 08:07:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 31 08:07:45 compute-0 infallible_kare[91338]: enabled application 'rbd' on pool 'images'
Jan 31 08:07:45 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 31 08:07:45 compute-0 systemd[1]: libpod-1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f.scope: Deactivated successfully.
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.761566191 +0000 UTC m=+0.727956708 container died 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b865e25c039a93fc8b12375fafe506e4e2fe16baacf7f01c374bde12108c39e-merged.mount: Deactivated successfully.
Jan 31 08:07:45 compute-0 podman[91324]: 2026-01-31 08:07:45.802045966 +0000 UTC m=+0.768436513 container remove 1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f (image=quay.io/ceph/ceph:v20, name=infallible_kare, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:07:45 compute-0 systemd[1]: libpod-conmon-1c4a34b1d7dc61df6f7e042ce32f4ae5001f1ed3910f27422bfc0ea3b99b4c5f.scope: Deactivated successfully.
Jan 31 08:07:45 compute-0 sudo[91321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:45 compute-0 sudo[91400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jijuizcjgefzemkvfjyebopxynegtczj ; /usr/bin/python3'
Jan 31 08:07:45 compute-0 sudo[91400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:46 compute-0 python3[91402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.131172004 +0000 UTC m=+0.039806117 container create 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:07:46 compute-0 systemd[1]: Started libpod-conmon-11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5.scope.
Jan 31 08:07:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6e56cd481e85f8cb94db65c0481780ffb68521b84973a09a256f974eda8306/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6e56cd481e85f8cb94db65c0481780ffb68521b84973a09a256f974eda8306/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.188610163 +0000 UTC m=+0.097244276 container init 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.193534003 +0000 UTC m=+0.102168156 container start 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.196635762 +0000 UTC m=+0.105269895 container attach 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.114396715 +0000 UTC m=+0.023030918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 31 08:07:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 08:07:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 08:07:46 compute-0 ceph-mon[75227]: pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:46 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3809225829' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 08:07:46 compute-0 ceph-mon[75227]: osdmap e29: 3 total, 3 up, 3 in
Jan 31 08:07:46 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 31 08:07:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 08:07:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 31 08:07:46 compute-0 recursing_moser[91418]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 08:07:46 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 31 08:07:46 compute-0 systemd[1]: libpod-11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5.scope: Deactivated successfully.
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.77601539 +0000 UTC m=+0.684649503 container died 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c6e56cd481e85f8cb94db65c0481780ffb68521b84973a09a256f974eda8306-merged.mount: Deactivated successfully.
Jan 31 08:07:46 compute-0 podman[91403]: 2026-01-31 08:07:46.810084552 +0000 UTC m=+0.718718695 container remove 11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5 (image=quay.io/ceph/ceph:v20, name=recursing_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:46 compute-0 systemd[1]: libpod-conmon-11e27b7dd17d8114617a940e8b037b26b48fbbf7c731a8ffc4b2b42767cdaff5.scope: Deactivated successfully.
Jan 31 08:07:46 compute-0 sudo[91400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:46 compute-0 sudo[91477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljqhoijrlknoklompajykydacfvqbhga ; /usr/bin/python3'
Jan 31 08:07:46 compute-0 sudo[91477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:47 compute-0 python3[91479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.132654294 +0000 UTC m=+0.051545862 container create d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:47 compute-0 systemd[1]: Started libpod-conmon-d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d.scope.
Jan 31 08:07:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a61f92ec24b54cd0741dd5027b670206c2237fc0d267e866517dc39b216ee9f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a61f92ec24b54cd0741dd5027b670206c2237fc0d267e866517dc39b216ee9f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.191888984 +0000 UTC m=+0.110780832 container init d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.195925309 +0000 UTC m=+0.114816907 container start d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.199371437 +0000 UTC m=+0.118263035 container attach d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.11287418 +0000 UTC m=+0.031765768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 31 08:07:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 08:07:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 08:07:47 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3137856532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 08:07:47 compute-0 ceph-mon[75227]: osdmap e30: 3 total, 3 up, 3 in
Jan 31 08:07:47 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 31 08:07:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 08:07:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 31 08:07:47 compute-0 funny_hertz[91495]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 08:07:47 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 31 08:07:47 compute-0 systemd[1]: libpod-d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d.scope: Deactivated successfully.
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.799554899 +0000 UTC m=+0.718446507 container died d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a61f92ec24b54cd0741dd5027b670206c2237fc0d267e866517dc39b216ee9f-merged.mount: Deactivated successfully.
Jan 31 08:07:47 compute-0 podman[91480]: 2026-01-31 08:07:47.845828239 +0000 UTC m=+0.764719837 container remove d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d (image=quay.io/ceph/ceph:v20, name=funny_hertz, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:47 compute-0 systemd[1]: libpod-conmon-d89d2d4aef17eb5678acae3ccbcd5a536f508e0504a725d005fc1e29db30de1d.scope: Deactivated successfully.
Jan 31 08:07:47 compute-0 sudo[91477]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:48 compute-0 python3[91606]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:07:49 compute-0 python3[91677]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846868.463966-36775-35686451693142/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:49 compute-0 ceph-mon[75227]: pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:49 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/237575243' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 08:07:49 compute-0 ceph-mon[75227]: osdmap e31: 3 total, 3 up, 3 in
Jan 31 08:07:49 compute-0 sudo[91777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcqurhofrwhaiiobjpkpjiestmpjbdrs ; /usr/bin/python3'
Jan 31 08:07:49 compute-0 sudo[91777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:49 compute-0 python3[91779]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:07:49 compute-0 sudo[91777]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:49 compute-0 sudo[91852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjeosoyzkgltkxtdcywwtyovkgmhgxtt ; /usr/bin/python3'
Jan 31 08:07:49 compute-0 sudo[91852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:50 compute-0 python3[91854]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846869.3998997-36789-143413542952391/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=061161ae8da8cd523119e3ac10ce6756b3664db4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:50 compute-0 sudo[91852]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:50 compute-0 ceph-mon[75227]: pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:50 compute-0 sudo[91902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brycqfecznaegstdlgbdmytnyenguttu ; /usr/bin/python3'
Jan 31 08:07:50 compute-0 sudo[91902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:50 compute-0 python3[91904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:50 compute-0 podman[91905]: 2026-01-31 08:07:50.600892573 +0000 UTC m=+0.034962918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:50 compute-0 podman[91905]: 2026-01-31 08:07:50.751060677 +0000 UTC m=+0.185130932 container create 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:07:50 compute-0 systemd[1]: Started libpod-conmon-0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c.scope.
Jan 31 08:07:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:50 compute-0 podman[91905]: 2026-01-31 08:07:50.878302437 +0000 UTC m=+0.312372722 container init 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:50 compute-0 podman[91905]: 2026-01-31 08:07:50.883798664 +0000 UTC m=+0.317868929 container start 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:07:50 compute-0 podman[91905]: 2026-01-31 08:07:50.96571508 +0000 UTC m=+0.399785445 container attach 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 31 08:07:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 08:07:51 compute-0 reverent_mirzakhani[91921]: 
Jan 31 08:07:51 compute-0 reverent_mirzakhani[91921]: [global]
Jan 31 08:07:51 compute-0 reverent_mirzakhani[91921]:         fsid = 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:07:51 compute-0 reverent_mirzakhani[91921]:         mon_host = 192.168.122.100
Jan 31 08:07:51 compute-0 reverent_mirzakhani[91921]:         rgw_keystone_api_version = 3
Jan 31 08:07:51 compute-0 systemd[1]: libpod-0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c.scope: Deactivated successfully.
Jan 31 08:07:51 compute-0 podman[91947]: 2026-01-31 08:07:51.471385776 +0000 UTC m=+0.031927002 container died 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:07:51 compute-0 sudo[91946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:51 compute-0 sudo[91946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:51 compute-0 sudo[91946]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:51 compute-0 sudo[91982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:07:51 compute-0 sudo[91982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:51 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 31 08:07:51 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1028385981' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 08:07:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-70d12cd45277ac2a7dcb60d09ae07fc605bb32f158c93267802529aa7bee359d-merged.mount: Deactivated successfully.
Jan 31 08:07:51 compute-0 podman[91947]: 2026-01-31 08:07:51.978965396 +0000 UTC m=+0.539506592 container remove 0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c (image=quay.io/ceph/ceph:v20, name=reverent_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:07:51 compute-0 systemd[1]: libpod-conmon-0c76014a452baf6b1f75d9a4cc2f7dd06de98af0695c59f34e80b58090de6d2c.scope: Deactivated successfully.
Jan 31 08:07:52 compute-0 sudo[91902]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:52 compute-0 sudo[92062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nroumjwgmbsecdxmrtxccafzolzbonwv ; /usr/bin/python3'
Jan 31 08:07:52 compute-0 sudo[92062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:52 compute-0 python3[92074]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:52 compute-0 podman[92077]: 2026-01-31 08:07:52.482532882 +0000 UTC m=+0.311709993 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:07:52 compute-0 podman[92091]: 2026-01-31 08:07:52.555858784 +0000 UTC m=+0.242084407 container create b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:52 compute-0 podman[92091]: 2026-01-31 08:07:52.474669627 +0000 UTC m=+0.160895260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:52 compute-0 podman[92077]: 2026-01-31 08:07:52.627914009 +0000 UTC m=+0.457091160 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:52 compute-0 ceph-mon[75227]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:52 compute-0 systemd[1]: Started libpod-conmon-b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a.scope.
Jan 31 08:07:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:52 compute-0 podman[92091]: 2026-01-31 08:07:52.840934236 +0000 UTC m=+0.527159919 container init b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:52 compute-0 podman[92091]: 2026-01-31 08:07:52.845649061 +0000 UTC m=+0.531874684 container start b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:07:52 compute-0 podman[92091]: 2026-01-31 08:07:52.954366802 +0000 UTC m=+0.640592405 container attach b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:07:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 31 08:07:53 compute-0 sudo[91982]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:07:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/82400921' entity='client.admin' 
Jan 31 08:07:53 compute-0 stoic_mestorf[92124]: set ssl_option
Jan 31 08:07:53 compute-0 systemd[1]: libpod-b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a.scope: Deactivated successfully.
Jan 31 08:07:53 compute-0 podman[92091]: 2026-01-31 08:07:53.631992562 +0000 UTC m=+1.318218145 container died b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:07:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f66fefaaacbf3d3188ee88cfe9053d4fc9ea0d3fbbf2ce3b658b12c4fc340647-merged.mount: Deactivated successfully.
Jan 31 08:07:54 compute-0 sudo[92282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:54 compute-0 sudo[92282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 sudo[92282]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 sudo[92307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:07:54 compute-0 sudo[92307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:54 compute-0 podman[92091]: 2026-01-31 08:07:54.401783993 +0000 UTC m=+2.088009606 container remove b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a (image=quay.io/ceph/ceph:v20, name=stoic_mestorf, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:54 compute-0 sudo[92062]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 systemd[1]: libpod-conmon-b052c4ccb5811c5ed639c9ebd1288e4d19bdac594940043115dc9f0707bbcb8a.scope: Deactivated successfully.
Jan 31 08:07:54 compute-0 sudo[92368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmkbnzxtpqxtxrtcoewegzmkjxpcjpvs ; /usr/bin/python3'
Jan 31 08:07:54 compute-0 sudo[92368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:54 compute-0 python3[92370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:54 compute-0 sudo[92307]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:54 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/82400921' entity='client.admin' 
Jan 31 08:07:54 compute-0 ceph-mon[75227]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:54 compute-0 podman[92386]: 2026-01-31 08:07:54.83214765 +0000 UTC m=+0.094979201 container create 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:07:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:54 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:07:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:07:54 compute-0 podman[92386]: 2026-01-31 08:07:54.75819475 +0000 UTC m=+0.021026311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:54 compute-0 systemd[1]: Started libpod-conmon-54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43.scope.
Jan 31 08:07:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:07:54 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:07:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:07:55 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:55 compute-0 sudo[92407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:55 compute-0 sudo[92407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:55 compute-0 sudo[92407]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:55 compute-0 sudo[92432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:07:55 compute-0 sudo[92432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:55 compute-0 podman[92386]: 2026-01-31 08:07:55.164622615 +0000 UTC m=+0.427454226 container init 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:55 compute-0 podman[92386]: 2026-01-31 08:07:55.174468446 +0000 UTC m=+0.437299997 container start 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:55 compute-0 podman[92386]: 2026-01-31 08:07:55.304092363 +0000 UTC m=+0.566923924 container attach 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:55 compute-0 podman[92489]: 2026-01-31 08:07:55.483948194 +0000 UTC m=+0.100360134 container create 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:07:55 compute-0 podman[92489]: 2026-01-31 08:07:55.417751116 +0000 UTC m=+0.034163106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:55 compute-0 systemd[1]: Started libpod-conmon-037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696.scope.
Jan 31 08:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:55 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:07:55 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 31 08:07:55 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 08:07:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 08:07:55 compute-0 podman[92489]: 2026-01-31 08:07:55.632662697 +0000 UTC m=+0.249074687 container init 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:07:55 compute-0 podman[92489]: 2026-01-31 08:07:55.639034359 +0000 UTC m=+0.255446299 container start 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:07:55 compute-0 stoic_leavitt[92505]: 167 167
Jan 31 08:07:55 compute-0 systemd[1]: libpod-037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696.scope: Deactivated successfully.
Jan 31 08:07:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:55 compute-0 friendly_torvalds[92404]: Scheduled rgw.rgw update...
Jan 31 08:07:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:55 compute-0 systemd[1]: libpod-54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43.scope: Deactivated successfully.
Jan 31 08:07:55 compute-0 podman[92489]: 2026-01-31 08:07:55.688975253 +0000 UTC m=+0.305387173 container attach 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:07:55 compute-0 podman[92489]: 2026-01-31 08:07:55.689734135 +0000 UTC m=+0.306146065 container died 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:07:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-801d2c5bbed6f422da49c43ca80226fb8263c201cf51e2ae6475f4d4e8db2dab-merged.mount: Deactivated successfully.
Jan 31 08:07:56 compute-0 podman[92489]: 2026-01-31 08:07:56.084217919 +0000 UTC m=+0.700629859 container remove 037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:07:56 compute-0 podman[92386]: 2026-01-31 08:07:56.104393184 +0000 UTC m=+1.367224745 container died 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:56 compute-0 systemd[1]: libpod-conmon-037aa3993d5377365407a3d9e41684aef2a5a18f2d90f902a349b8d29dc81696.scope: Deactivated successfully.
Jan 31 08:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f654822f402f9f7ac953c077b8536f17da11ac79a1040ad97964d357a548ba29-merged.mount: Deactivated successfully.
Jan 31 08:07:56 compute-0 podman[92541]: 2026-01-31 08:07:56.22029351 +0000 UTC m=+0.024073947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:56 compute-0 podman[92386]: 2026-01-31 08:07:56.411034672 +0000 UTC m=+1.673866203 container remove 54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43 (image=quay.io/ceph/ceph:v20, name=friendly_torvalds, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:56 compute-0 sudo[92368]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:56 compute-0 podman[92541]: 2026-01-31 08:07:56.495541533 +0000 UTC m=+0.299321990 container create 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:56 compute-0 systemd[1]: Started libpod-conmon-832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b.scope.
Jan 31 08:07:56 compute-0 systemd[1]: libpod-conmon-54d880109f58e0e17438b7e34c5716b8b98a30811f71e6c540cb06b75e334f43.scope: Deactivated successfully.
Jan 31 08:07:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:56 compute-0 podman[92541]: 2026-01-31 08:07:56.655754953 +0000 UTC m=+0.459535460 container init 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:56 compute-0 podman[92541]: 2026-01-31 08:07:56.663996458 +0000 UTC m=+0.467776915 container start 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:56 compute-0 podman[92541]: 2026-01-31 08:07:56.679911912 +0000 UTC m=+0.483692369 container attach 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:56 compute-0 ceph-mon[75227]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:07:56 compute-0 ceph-mon[75227]: Saving service rgw.rgw spec with placement compute-0
Jan 31 08:07:56 compute-0 ceph-mon[75227]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:57 compute-0 compassionate_ritchie[92559]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:07:57 compute-0 compassionate_ritchie[92559]: --> All data devices are unavailable
Jan 31 08:07:57 compute-0 systemd[1]: libpod-832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b.scope: Deactivated successfully.
Jan 31 08:07:57 compute-0 podman[92541]: 2026-01-31 08:07:57.192434982 +0000 UTC m=+0.996215439 container died 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-45eed93a61547a8d67fe97e01ffd324f8b00ac4027ec05159e2830455e48f7b7-merged.mount: Deactivated successfully.
Jan 31 08:07:57 compute-0 podman[92541]: 2026-01-31 08:07:57.246610448 +0000 UTC m=+1.050390895 container remove 832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:07:57 compute-0 systemd[1]: libpod-conmon-832be97bf11008356fa2cc107579d58b70ca77bcbbc02f69e72a1ae473ac514b.scope: Deactivated successfully.
Jan 31 08:07:57 compute-0 sudo[92432]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:57 compute-0 python3[92651]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:07:57 compute-0 sudo[92666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:57 compute-0 sudo[92666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:57 compute-0 sudo[92666]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:57 compute-0 sudo[92712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:07:57 compute-0 sudo[92712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:57 compute-0 python3[92786]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846876.9895024-36830-98997909560458/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:07:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.716153653 +0000 UTC m=+0.048374781 container create 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:07:57 compute-0 systemd[1]: Started libpod-conmon-30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f.scope.
Jan 31 08:07:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.691489609 +0000 UTC m=+0.023710817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.792429329 +0000 UTC m=+0.124650497 container init 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.797846943 +0000 UTC m=+0.130068091 container start 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:57 compute-0 unruffled_lamport[92839]: 167 167
Jan 31 08:07:57 compute-0 systemd[1]: libpod-30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f.scope: Deactivated successfully.
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.80158253 +0000 UTC m=+0.133803668 container attach 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.801987221 +0000 UTC m=+0.134208339 container died 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-96b3efe8852b5afe70540dae4a008de41d1f4b8085947019782c60acbc47e182-merged.mount: Deactivated successfully.
Jan 31 08:07:57 compute-0 podman[92806]: 2026-01-31 08:07:57.838096351 +0000 UTC m=+0.170317469 container remove 30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 31 08:07:57 compute-0 systemd[1]: libpod-conmon-30ad397b34a2691a36fe40b9f7a7814efd46d35164c4bdb5ff98d2ef19e1c54f.scope: Deactivated successfully.
Jan 31 08:07:57 compute-0 sudo[92888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyzjcryeanqzlougggwmtzdvzfgthaez ; /usr/bin/python3'
Jan 31 08:07:57 compute-0 sudo[92888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:07:58 compute-0 podman[92887]: 2026-01-31 08:07:58.047012701 +0000 UTC m=+0.109878735 container create 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:07:58 compute-0 podman[92887]: 2026-01-31 08:07:57.959186726 +0000 UTC m=+0.022052850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:07:58 compute-0 python3[92901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:07:58 compute-0 systemd[1]: Started libpod-conmon-31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc.scope.
Jan 31 08:07:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 podman[92904]: 2026-01-31 08:07:58.227239083 +0000 UTC m=+0.094629261 container create 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:07:58 compute-0 podman[92904]: 2026-01-31 08:07:58.155572128 +0000 UTC m=+0.022962286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:07:58 compute-0 systemd[1]: Started libpod-conmon-123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631.scope.
Jan 31 08:07:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:07:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:07:58 compute-0 podman[92887]: 2026-01-31 08:07:58.358119306 +0000 UTC m=+0.420985430 container init 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:07:58 compute-0 podman[92887]: 2026-01-31 08:07:58.363945072 +0000 UTC m=+0.426811136 container start 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:58 compute-0 podman[92887]: 2026-01-31 08:07:58.440550488 +0000 UTC m=+0.503416552 container attach 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:07:58 compute-0 podman[92904]: 2026-01-31 08:07:58.54335245 +0000 UTC m=+0.410742608 container init 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:58 compute-0 podman[92904]: 2026-01-31 08:07:58.552637055 +0000 UTC m=+0.420027203 container start 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:07:58 compute-0 nervous_morse[92919]: {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:     "0": [
Jan 31 08:07:58 compute-0 nervous_morse[92919]:         {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "devices": [
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "/dev/loop3"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             ],
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_name": "ceph_lv0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_size": "21470642176",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "name": "ceph_lv0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "tags": {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.crush_device_class": "",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.encrypted": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osd_id": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.type": "block",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.vdo": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.with_tpm": "0"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             },
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "type": "block",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "vg_name": "ceph_vg0"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:         }
Jan 31 08:07:58 compute-0 nervous_morse[92919]:     ],
Jan 31 08:07:58 compute-0 nervous_morse[92919]:     "1": [
Jan 31 08:07:58 compute-0 nervous_morse[92919]:         {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "devices": [
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "/dev/loop4"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             ],
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_name": "ceph_lv1",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_size": "21470642176",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "name": "ceph_lv1",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "tags": {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.crush_device_class": "",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.encrypted": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osd_id": "1",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.type": "block",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.vdo": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.with_tpm": "0"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             },
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "type": "block",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "vg_name": "ceph_vg1"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:         }
Jan 31 08:07:58 compute-0 nervous_morse[92919]:     ],
Jan 31 08:07:58 compute-0 nervous_morse[92919]:     "2": [
Jan 31 08:07:58 compute-0 nervous_morse[92919]:         {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "devices": [
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "/dev/loop5"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             ],
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_name": "ceph_lv2",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_size": "21470642176",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "name": "ceph_lv2",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "tags": {
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.cluster_name": "ceph",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.crush_device_class": "",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.encrypted": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.objectstore": "bluestore",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osd_id": "2",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.type": "block",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.vdo": "0",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:                 "ceph.with_tpm": "0"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             },
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "type": "block",
Jan 31 08:07:58 compute-0 nervous_morse[92919]:             "vg_name": "ceph_vg2"
Jan 31 08:07:58 compute-0 nervous_morse[92919]:         }
Jan 31 08:07:58 compute-0 nervous_morse[92919]:     ]
Jan 31 08:07:58 compute-0 nervous_morse[92919]: }
Jan 31 08:07:58 compute-0 systemd[1]: libpod-31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc.scope: Deactivated successfully.
Jan 31 08:07:58 compute-0 podman[92904]: 2026-01-31 08:07:58.678610849 +0000 UTC m=+0.546001077 container attach 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:07:58 compute-0 podman[92887]: 2026-01-31 08:07:58.679587907 +0000 UTC m=+0.742453941 container died 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-12fd2ed479432f2d4addf3d10e2182091e3ded55c87602cff67af03a377129e0-merged.mount: Deactivated successfully.
Jan 31 08:07:59 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:07:59 compute-0 ceph-mgr[75519]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 08:07:59 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0[75223]: 2026-01-31T08:07:59.021+0000 7f922797a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e2 new map
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-31T08:07:59.022862+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T08:07:59.022433+0000
                                           modified        2026-01-31T08:07:59.022433+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 08:07:59 compute-0 podman[92887]: 2026-01-31 08:07:59.39546727 +0000 UTC m=+1.458333344 container remove 31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_morse, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:07:59 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 08:07:59 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 08:07:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:59 compute-0 ceph-mon[75227]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 31 08:07:59 compute-0 ceph-mon[75227]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 08:07:59 compute-0 ceph-mon[75227]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 08:07:59 compute-0 sudo[92712]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:59 compute-0 systemd[1]: libpod-conmon-31951f04471f9058d9d7072f7966fb2603ead56816684467107bd937f17d05cc.scope: Deactivated successfully.
Jan 31 08:07:59 compute-0 sudo[92969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:07:59 compute-0 sudo[92969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:59 compute-0 sudo[92969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:07:59 compute-0 ceph-mgr[75519]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 08:07:59 compute-0 systemd[1]: libpod-123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631.scope: Deactivated successfully.
Jan 31 08:07:59 compute-0 podman[92904]: 2026-01-31 08:07:59.58303097 +0000 UTC m=+1.450421208 container died 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:07:59 compute-0 sudo[92994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:07:59 compute-0 sudo[92994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:07:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:07:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-20bb9f05f49a9baf8a18052cb5549b3065b5ce16c06dd7dab3c142e64bbb024f-merged.mount: Deactivated successfully.
Jan 31 08:07:59 compute-0 podman[92904]: 2026-01-31 08:07:59.86380159 +0000 UTC m=+1.731191768 container remove 123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631 (image=quay.io/ceph/ceph:v20, name=nervous_kalam, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:07:59 compute-0 sudo[92888]: pam_unix(sudo:session): session closed for user root
Jan 31 08:07:59 compute-0 systemd[1]: libpod-conmon-123aa49d8e81824e9d07b2813fa98d184b2142a160db2cd99ccf288e47344631.scope: Deactivated successfully.
Jan 31 08:07:59 compute-0 podman[93045]: 2026-01-31 08:07:59.997042501 +0000 UTC m=+0.098133671 container create cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:00 compute-0 sudo[93082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsgeshbbccfidzgdjgdoxsdrbezmhlaa ; /usr/bin/python3'
Jan 31 08:08:00 compute-0 sudo[93082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:00 compute-0 podman[93045]: 2026-01-31 08:07:59.929437232 +0000 UTC m=+0.030528452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:00 compute-0 systemd[1]: Started libpod-conmon-cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013.scope.
Jan 31 08:08:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:00 compute-0 python3[93084]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:00 compute-0 podman[93045]: 2026-01-31 08:08:00.239414775 +0000 UTC m=+0.340505935 container init cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:00 compute-0 podman[93045]: 2026-01-31 08:08:00.245097697 +0000 UTC m=+0.346188837 container start cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:00 compute-0 gracious_lalande[93087]: 167 167
Jan 31 08:08:00 compute-0 systemd[1]: libpod-cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013.scope: Deactivated successfully.
Jan 31 08:08:00 compute-0 podman[93045]: 2026-01-31 08:08:00.314884178 +0000 UTC m=+0.415975428 container attach cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:08:00 compute-0 podman[93045]: 2026-01-31 08:08:00.315519947 +0000 UTC m=+0.416611117 container died cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:08:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 08:08:00 compute-0 ceph-mon[75227]: osdmap e32: 3 total, 3 up, 3 in
Jan 31 08:08:00 compute-0 ceph-mon[75227]: fsmap cephfs:0
Jan 31 08:08:00 compute-0 ceph-mon[75227]: Saving service mds.cephfs spec with placement compute-0
Jan 31 08:08:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:00 compute-0 ceph-mon[75227]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a63cf6bd11f30f87e41c697eb8c64cd3b1400600c42b13b3c022e1c10b3663b-merged.mount: Deactivated successfully.
Jan 31 08:08:00 compute-0 podman[93045]: 2026-01-31 08:08:00.952022803 +0000 UTC m=+1.053113943 container remove cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:00 compute-0 systemd[1]: libpod-conmon-cca6eaaf0e6a4e88e78d5c480e89067966cb97228cb82db1d789b17524358013.scope: Deactivated successfully.
Jan 31 08:08:01 compute-0 podman[93090]: 2026-01-31 08:08:01.018927562 +0000 UTC m=+0.784733977 container create fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:08:01 compute-0 podman[93090]: 2026-01-31 08:08:00.988091722 +0000 UTC m=+0.753898217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:01 compute-0 systemd[1]: Started libpod-conmon-fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13.scope.
Jan 31 08:08:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 podman[93125]: 2026-01-31 08:08:01.120416837 +0000 UTC m=+0.078139030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:01 compute-0 podman[93125]: 2026-01-31 08:08:01.225192066 +0000 UTC m=+0.182914199 container create 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:08:01 compute-0 podman[93090]: 2026-01-31 08:08:01.269766628 +0000 UTC m=+1.035573103 container init fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:08:01 compute-0 podman[93090]: 2026-01-31 08:08:01.278529308 +0000 UTC m=+1.044335733 container start fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:08:01 compute-0 podman[93090]: 2026-01-31 08:08:01.34451644 +0000 UTC m=+1.110322875 container attach fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:01 compute-0 systemd[1]: Started libpod-conmon-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope.
Jan 31 08:08:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:01 compute-0 podman[93125]: 2026-01-31 08:08:01.540939904 +0000 UTC m=+0.498662047 container init 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:08:01 compute-0 podman[93125]: 2026-01-31 08:08:01.547217933 +0000 UTC m=+0.504940046 container start 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:08:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:01 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:08:01 compute-0 ceph-mgr[75519]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 31 08:08:01 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 31 08:08:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 08:08:01 compute-0 podman[93125]: 2026-01-31 08:08:01.738970403 +0000 UTC m=+0.696692506 container attach 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:01 compute-0 kind_napier[93141]: Scheduled mds.cephfs update...
Jan 31 08:08:01 compute-0 systemd[1]: libpod-fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13.scope: Deactivated successfully.
Jan 31 08:08:01 compute-0 podman[93090]: 2026-01-31 08:08:01.80264813 +0000 UTC m=+1.568454545 container died fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb6a8950d01fe18c37fa0e1f88f5ee6c5deec8d31e61383dbea71f820482606-merged.mount: Deactivated successfully.
Jan 31 08:08:02 compute-0 lvm[93259]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:08:02 compute-0 lvm[93259]: VG ceph_vg1 finished
Jan 31 08:08:02 compute-0 lvm[93256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:08:02 compute-0 lvm[93256]: VG ceph_vg0 finished
Jan 31 08:08:02 compute-0 lvm[93261]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:08:02 compute-0 lvm[93261]: VG ceph_vg2 finished
Jan 31 08:08:02 compute-0 silly_jemison[93166]: {}
Jan 31 08:08:02 compute-0 systemd[1]: libpod-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope: Deactivated successfully.
Jan 31 08:08:02 compute-0 systemd[1]: libpod-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope: Consumed 1.082s CPU time.
Jan 31 08:08:02 compute-0 podman[93090]: 2026-01-31 08:08:02.430920383 +0000 UTC m=+2.196726778 container remove fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13 (image=quay.io/ceph/ceph:v20, name=kind_napier, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:02 compute-0 systemd[1]: libpod-conmon-fdb31e040ba11c58b5eec5485817aa820cdf3b0c50db96dbc7444662ebba0b13.scope: Deactivated successfully.
Jan 31 08:08:02 compute-0 sudo[93082]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:02 compute-0 podman[93125]: 2026-01-31 08:08:02.479820588 +0000 UTC m=+1.437542731 container died 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-767618a7083ac06471bd2ffee34e7fa05823f542249cc42361817bedcbe7fea9-merged.mount: Deactivated successfully.
Jan 31 08:08:03 compute-0 ceph-mon[75227]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:03 compute-0 ceph-mon[75227]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:08:03 compute-0 ceph-mon[75227]: Saving service mds.cephfs spec with placement compute-0
Jan 31 08:08:03 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:03 compute-0 podman[93264]: 2026-01-31 08:08:03.220164058 +0000 UTC m=+0.903776324 container remove 91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_jemison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:08:03 compute-0 systemd[1]: libpod-conmon-91f9926870e61c9294f5f719526b7edb30337de7c46f7dbec093d01285d928cb.scope: Deactivated successfully.
Jan 31 08:08:03 compute-0 sudo[92994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:03 compute-0 sudo[93355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-volqbwtmvtqqftvrdtnqkuihskkwcrpg ; /usr/bin/python3'
Jan 31 08:08:03 compute-0 sudo[93355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:03 compute-0 python3[93357]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 08:08:03 compute-0 sudo[93355]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:03 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:03 compute-0 sudo[93428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sejdznmfcksnscenndglutawdaucehej ; /usr/bin/python3'
Jan 31 08:08:03 compute-0 sudo[93428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:03 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:03 compute-0 sudo[93431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:08:03 compute-0 sudo[93431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:03 compute-0 sudo[93431]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:03 compute-0 sudo[93456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:03 compute-0 sudo[93456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:03 compute-0 sudo[93456]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:03 compute-0 python3[93430]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769846883.1898232-36878-228351024344322/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=5ead94c69bd1df72757f346af781128058784f3a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:08:03 compute-0 sudo[93481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:08:03 compute-0 sudo[93481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:03 compute-0 sudo[93428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:04 compute-0 sudo[93583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknylmhdbcomqpgniqyvmtjebrvatdzi ; /usr/bin/python3'
Jan 31 08:08:04 compute-0 sudo[93583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:04 compute-0 python3[93586]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:04 compute-0 podman[93597]: 2026-01-31 08:08:04.321863675 +0000 UTC m=+0.214863210 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:08:04 compute-0 podman[93611]: 2026-01-31 08:08:04.318670174 +0000 UTC m=+0.099614322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:05 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:05 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:05 compute-0 ceph-mon[75227]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:05 compute-0 podman[93611]: 2026-01-31 08:08:05.39924641 +0000 UTC m=+1.180190518 container create c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:05 compute-0 systemd[1]: Started libpod-conmon-c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c.scope.
Jan 31 08:08:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fb36597048816fd878d8403fa53bd4d06dbceda77ccdd0f6f41b0c8c7be20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/592fb36597048816fd878d8403fa53bd4d06dbceda77ccdd0f6f41b0c8c7be20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:05 compute-0 podman[93611]: 2026-01-31 08:08:05.859686406 +0000 UTC m=+1.640630564 container init c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:05 compute-0 podman[93611]: 2026-01-31 08:08:05.865110451 +0000 UTC m=+1.646054579 container start c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Jan 31 08:08:06 compute-0 podman[93611]: 2026-01-31 08:08:06.013582636 +0000 UTC m=+1.794526784 container attach c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:06 compute-0 podman[93597]: 2026-01-31 08:08:06.077409327 +0000 UTC m=+1.970408862 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:08:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 31 08:08:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 08:08:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 08:08:06 compute-0 systemd[1]: libpod-c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c.scope: Deactivated successfully.
Jan 31 08:08:06 compute-0 podman[93611]: 2026-01-31 08:08:06.457227842 +0000 UTC m=+2.238192071 container died c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:08:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-592fb36597048816fd878d8403fa53bd4d06dbceda77ccdd0f6f41b0c8c7be20-merged.mount: Deactivated successfully.
Jan 31 08:08:06 compute-0 ceph-mon[75227]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:06 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 31 08:08:06 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/81335662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 08:08:06 compute-0 podman[93611]: 2026-01-31 08:08:06.833769604 +0000 UTC m=+2.614713742 container remove c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c (image=quay.io/ceph/ceph:v20, name=festive_dirac, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:06 compute-0 sudo[93583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:06 compute-0 systemd[1]: libpod-conmon-c446fc5afdbd5c90554a332dc1a6c4328cdb70310c7ccb9768d1be57bc330c1c.scope: Deactivated successfully.
Jan 31 08:08:07 compute-0 sudo[93481]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:07 compute-0 sudo[93828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pegtjlwnrncjpgkevzvugiknefmektoz ; /usr/bin/python3'
Jan 31 08:08:07 compute-0 sudo[93828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:07 compute-0 sudo[93825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:07 compute-0 sudo[93825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:07 compute-0 sudo[93825]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:07 compute-0 sudo[93853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:08:07 compute-0 sudo[93853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:07 compute-0 python3[93848]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:07 compute-0 podman[93879]: 2026-01-31 08:08:07.722239129 +0000 UTC m=+0.108543128 container create 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:08:07 compute-0 podman[93879]: 2026-01-31 08:08:07.637699547 +0000 UTC m=+0.024003586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:07 compute-0 systemd[1]: Started libpod-conmon-292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0.scope.
Jan 31 08:08:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837265d097033e6971d21347b8ff5c0c83f349a6300c855a3163e22588645f8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/837265d097033e6971d21347b8ff5c0c83f349a6300c855a3163e22588645f8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:07 compute-0 podman[93879]: 2026-01-31 08:08:07.97290597 +0000 UTC m=+0.359210039 container init 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:07 compute-0 podman[93879]: 2026-01-31 08:08:07.982055291 +0000 UTC m=+0.368359280 container start 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:07.938347284 +0000 UTC m=+0.200131040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:08 compute-0 podman[93879]: 2026-01-31 08:08:08.104301168 +0000 UTC m=+0.490605127 container attach 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:08.176686543 +0000 UTC m=+0.438470299 container create f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:08:08 compute-0 systemd[1]: Started libpod-conmon-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope.
Jan 31 08:08:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:08.257612582 +0000 UTC m=+0.519396388 container init f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:08.26456733 +0000 UTC m=+0.526351096 container start f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:08 compute-0 unruffled_mestorf[93948]: 167 167
Jan 31 08:08:08 compute-0 systemd[1]: libpod-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope: Deactivated successfully.
Jan 31 08:08:08 compute-0 conmon[93948]: conmon f0fb042824d0f33e2aa3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope/container/memory.events
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:08.269798659 +0000 UTC m=+0.531582465 container attach f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:08.270154549 +0000 UTC m=+0.531938305 container died f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eaabd533092cac6ae6846c663bfa7f58ac7329711d570dad79254f2aa564ba7-merged.mount: Deactivated successfully.
Jan 31 08:08:08 compute-0 podman[93906]: 2026-01-31 08:08:08.310167121 +0000 UTC m=+0.571950847 container remove f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:08:08 compute-0 systemd[1]: libpod-conmon-f0fb042824d0f33e2aa33dba90522cac3a5c6da280af6704b19c097f9a0747a1.scope: Deactivated successfully.
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:08 compute-0 ceph-mon[75227]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:08 compute-0 podman[93973]: 2026-01-31 08:08:08.460888981 +0000 UTC m=+0.056278067 container create 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:08:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 08:08:08 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2508594488' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:08:08 compute-0 xenodochial_babbage[93921]: 
Jan 31 08:08:08 compute-0 xenodochial_babbage[93921]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":115,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846853,"num_in_osds":3,"osd_in_since":1769846828,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83894272,"bytes_avail":64328032256,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-31T08:07:59:022862+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T08:07:33.658076+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 08:08:08 compute-0 systemd[1]: Started libpod-conmon-3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb.scope.
Jan 31 08:08:08 compute-0 podman[93879]: 2026-01-31 08:08:08.511456733 +0000 UTC m=+0.897760692 container died 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:08:08 compute-0 systemd[1]: libpod-292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0.scope: Deactivated successfully.
Jan 31 08:08:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:08 compute-0 podman[93973]: 2026-01-31 08:08:08.437322298 +0000 UTC m=+0.032711424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-837265d097033e6971d21347b8ff5c0c83f349a6300c855a3163e22588645f8e-merged.mount: Deactivated successfully.
Jan 31 08:08:08 compute-0 podman[93973]: 2026-01-31 08:08:08.55902297 +0000 UTC m=+0.154412066 container init 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:08:08 compute-0 podman[93973]: 2026-01-31 08:08:08.572927877 +0000 UTC m=+0.168316963 container start 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:08 compute-0 podman[93879]: 2026-01-31 08:08:08.582377306 +0000 UTC m=+0.968681265 container remove 292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 08:08:08 compute-0 systemd[1]: libpod-conmon-292394b2690ded854182f588fd86aa405763027952746842bfce4de8079903c0.scope: Deactivated successfully.
Jan 31 08:08:08 compute-0 podman[93973]: 2026-01-31 08:08:08.596831539 +0000 UTC m=+0.192220695 container attach 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:08 compute-0 sudo[93828]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:08 compute-0 sudo[94029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzdamseqypwjizhusmbgzwmqjoirtrvc ; /usr/bin/python3'
Jan 31 08:08:08 compute-0 sudo[94029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:08 compute-0 python3[94031]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:08 compute-0 podman[94041]: 2026-01-31 08:08:08.989152871 +0000 UTC m=+0.080152428 container create 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:09 compute-0 modest_mestorf[93991]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:08:09 compute-0 modest_mestorf[93991]: --> All data devices are unavailable
Jan 31 08:08:09 compute-0 systemd[1]: libpod-3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[94041]: 2026-01-31 08:08:08.947202024 +0000 UTC m=+0.038201561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:09 compute-0 podman[93973]: 2026-01-31 08:08:09.109693889 +0000 UTC m=+0.705082975 container died 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:09 compute-0 systemd[1]: Started libpod-conmon-8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3.scope.
Jan 31 08:08:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760e8ffa8e4c4b301165faf031c8a397fcff553872db6d413e2dbdf35ac2106/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7760e8ffa8e4c4b301165faf031c8a397fcff553872db6d413e2dbdf35ac2106/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:09 compute-0 podman[94041]: 2026-01-31 08:08:09.203941668 +0000 UTC m=+0.294941215 container init 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:09 compute-0 podman[94041]: 2026-01-31 08:08:09.212354978 +0000 UTC m=+0.303354515 container start 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:09 compute-0 podman[94041]: 2026-01-31 08:08:09.240848451 +0000 UTC m=+0.331848048 container attach 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2870281f9628df399d8b04fa961b5faba5e0de43fbf50ec6f45d27c70e6b25c0-merged.mount: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[94060]: 2026-01-31 08:08:09.290331032 +0000 UTC m=+0.245164174 container remove 3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:08:09 compute-0 systemd[1]: libpod-conmon-3bc106cc8fe013228f6fa92e0844e5b166496a022e9616b561847c71b32fe0cb.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 sudo[93853]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:09 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2508594488' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:08:09 compute-0 sudo[94100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:09 compute-0 sudo[94100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:09 compute-0 sudo[94100]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:09 compute-0 sudo[94125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:08:09 compute-0 sudo[94125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.740409922 +0000 UTC m=+0.065647134 container create 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 31 08:08:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47939758' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 08:08:09 compute-0 charming_nash[94075]: 
Jan 31 08:08:09 compute-0 charming_nash[94075]: {"epoch":1,"fsid":"82c880e6-d992-5408-8b12-efff9c275473","modified":"2026-01-31T08:06:09.429767Z","created":"2026-01-31T08:06:09.429767Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 31 08:08:09 compute-0 charming_nash[94075]: dumped monmap epoch 1
Jan 31 08:08:09 compute-0 systemd[1]: libpod-8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[94041]: 2026-01-31 08:08:09.758858568 +0000 UTC m=+0.849858075 container died 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 08:08:09 compute-0 systemd[1]: Started libpod-conmon-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope.
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.695348176 +0000 UTC m=+0.020585398 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-7760e8ffa8e4c4b301165faf031c8a397fcff553872db6d413e2dbdf35ac2106-merged.mount: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[94041]: 2026-01-31 08:08:09.818201721 +0000 UTC m=+0.909201238 container remove 8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3 (image=quay.io/ceph/ceph:v20, name=charming_nash, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:09 compute-0 systemd[1]: libpod-conmon-8b0018edebd6b12ba93215c2d0f40c618916e259be6009bd7d2e5736949975b3.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.834369562 +0000 UTC m=+0.159606834 container init 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:08:09 compute-0 sudo[94029]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.840762335 +0000 UTC m=+0.165999527 container start 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 31 08:08:09 compute-0 objective_bartik[94190]: 167 167
Jan 31 08:08:09 compute-0 systemd[1]: libpod-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope: Deactivated successfully.
Jan 31 08:08:09 compute-0 conmon[94190]: conmon 5e0f923888ae588b7640 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope/container/memory.events
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.848492935 +0000 UTC m=+0.173730197 container attach 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.84899726 +0000 UTC m=+0.174234452 container died 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b014380395d5209fcd0564845332ef1ca5987f0cd5f2af8d0c97cfd18c11d0c-merged.mount: Deactivated successfully.
Jan 31 08:08:09 compute-0 podman[94161]: 2026-01-31 08:08:09.899466599 +0000 UTC m=+0.224703801 container remove 5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:08:09 compute-0 systemd[1]: libpod-conmon-5e0f923888ae588b764057eb91b5eb7524467c5da730f62294e24bea70d5178b.scope: Deactivated successfully.
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.066524215 +0000 UTC m=+0.052739105 container create 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:10 compute-0 systemd[1]: Started libpod-conmon-7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030.scope.
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.043401776 +0000 UTC m=+0.029616646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.165939221 +0000 UTC m=+0.152154101 container init 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.171474409 +0000 UTC m=+0.157689299 container start 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.176613966 +0000 UTC m=+0.162828866 container attach 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:10 compute-0 sudo[94260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxznyusiesfvjmxdqnppuubmoggucdff ; /usr/bin/python3'
Jan 31 08:08:10 compute-0 sudo[94260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:10 compute-0 python3[94264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:10 compute-0 podman[94267]: 2026-01-31 08:08:10.392144034 +0000 UTC m=+0.047052903 container create 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:08:10 compute-0 ceph-mon[75227]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:10 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/47939758' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 31 08:08:10 compute-0 systemd[1]: Started libpod-conmon-2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5.scope.
Jan 31 08:08:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdd55939b38292a4f9af59befc93ba559c6991d1f50c4df81dc2ba4260196ee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffdd55939b38292a4f9af59befc93ba559c6991d1f50c4df81dc2ba4260196ee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:10 compute-0 podman[94267]: 2026-01-31 08:08:10.376351034 +0000 UTC m=+0.031259883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:10 compute-0 thirsty_benz[94234]: {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:     "0": [
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:         {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "devices": [
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "/dev/loop3"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             ],
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_name": "ceph_lv0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_size": "21470642176",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "name": "ceph_lv0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "tags": {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.crush_device_class": "",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.encrypted": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osd_id": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.type": "block",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.vdo": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.with_tpm": "0"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             },
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "type": "block",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "vg_name": "ceph_vg0"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:         }
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:     ],
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:     "1": [
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:         {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "devices": [
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "/dev/loop4"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             ],
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_name": "ceph_lv1",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_size": "21470642176",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "name": "ceph_lv1",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "tags": {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.crush_device_class": "",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.encrypted": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osd_id": "1",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.type": "block",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.vdo": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.with_tpm": "0"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             },
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "type": "block",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "vg_name": "ceph_vg1"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:         }
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:     ],
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:     "2": [
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:         {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "devices": [
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "/dev/loop5"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             ],
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_name": "ceph_lv2",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_size": "21470642176",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "name": "ceph_lv2",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "tags": {
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.crush_device_class": "",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.encrypted": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osd_id": "2",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.type": "block",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.vdo": "0",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:                 "ceph.with_tpm": "0"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             },
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "type": "block",
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:             "vg_name": "ceph_vg2"
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:         }
Jan 31 08:08:10 compute-0 thirsty_benz[94234]:     ]
Jan 31 08:08:10 compute-0 thirsty_benz[94234]: }
Jan 31 08:08:10 compute-0 podman[94267]: 2026-01-31 08:08:10.476405648 +0000 UTC m=+0.131314567 container init 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:10 compute-0 podman[94267]: 2026-01-31 08:08:10.481668828 +0000 UTC m=+0.136577697 container start 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:08:10 compute-0 podman[94267]: 2026-01-31 08:08:10.484552121 +0000 UTC m=+0.139460950 container attach 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:08:10 compute-0 systemd[1]: libpod-7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030.scope: Deactivated successfully.
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.49889427 +0000 UTC m=+0.485109110 container died 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7edaf8d3ee38fde0df1aa9563f113e7c1c013efeb8b8151b8fd0c95201a48826-merged.mount: Deactivated successfully.
Jan 31 08:08:10 compute-0 podman[94218]: 2026-01-31 08:08:10.530694397 +0000 UTC m=+0.516909247 container remove 7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:08:10 compute-0 systemd[1]: libpod-conmon-7b2160978ab11bdfbb3427e633da87c1410699b621c425b3498a5ac0c80e8030.scope: Deactivated successfully.
Jan 31 08:08:10 compute-0 sudo[94125]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:10 compute-0 sudo[94298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:10 compute-0 sudo[94298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:10 compute-0 sudo[94298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:10 compute-0 sudo[94340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:08:10 compute-0 sudo[94340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 31 08:08:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1595703628' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 08:08:11 compute-0 lucid_moore[94283]: [client.openstack]
Jan 31 08:08:11 compute-0 lucid_moore[94283]:         key = AQDNt31pAAAAABAAYp99ADqsmeg1iSEhkwiYUA==
Jan 31 08:08:11 compute-0 lucid_moore[94283]:         caps mgr = "allow *"
Jan 31 08:08:11 compute-0 lucid_moore[94283]:         caps mon = "profile rbd"
Jan 31 08:08:11 compute-0 lucid_moore[94283]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 08:08:11 compute-0 podman[94378]: 2026-01-31 08:08:10.924007707 +0000 UTC m=+0.019075735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:11 compute-0 systemd[1]: libpod-2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5.scope: Deactivated successfully.
Jan 31 08:08:11 compute-0 podman[94378]: 2026-01-31 08:08:11.100034848 +0000 UTC m=+0.195102856 container create a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:11 compute-0 podman[94267]: 2026-01-31 08:08:11.10080528 +0000 UTC m=+0.755714149 container died 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:08:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffdd55939b38292a4f9af59befc93ba559c6991d1f50c4df81dc2ba4260196ee-merged.mount: Deactivated successfully.
Jan 31 08:08:11 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1595703628' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 31 08:08:11 compute-0 podman[94267]: 2026-01-31 08:08:11.664688916 +0000 UTC m=+1.319597785 container remove 2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5 (image=quay.io/ceph/ceph:v20, name=lucid_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:08:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:11 compute-0 sudo[94260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:11 compute-0 systemd[1]: libpod-conmon-2961f16d9da760e9229d0d5ed607fa76cad018a79d1be199dc0d215b417da3d5.scope: Deactivated successfully.
Jan 31 08:08:11 compute-0 systemd[1]: Started libpod-conmon-a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786.scope.
Jan 31 08:08:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:12 compute-0 podman[94378]: 2026-01-31 08:08:12.004733047 +0000 UTC m=+1.099801155 container init a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:08:12 compute-0 podman[94378]: 2026-01-31 08:08:12.012583031 +0000 UTC m=+1.107651079 container start a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:12 compute-0 recursing_cartwright[94409]: 167 167
Jan 31 08:08:12 compute-0 systemd[1]: libpod-a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786.scope: Deactivated successfully.
Jan 31 08:08:12 compute-0 podman[94378]: 2026-01-31 08:08:12.031685726 +0000 UTC m=+1.126753744 container attach a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:12 compute-0 podman[94378]: 2026-01-31 08:08:12.032584101 +0000 UTC m=+1.127652119 container died a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-57b7e697e5c183e316d8e8ff38c2205d20278acc1ad8a198ed16c358189a5177-merged.mount: Deactivated successfully.
Jan 31 08:08:12 compute-0 podman[94378]: 2026-01-31 08:08:12.572159684 +0000 UTC m=+1.667227722 container remove a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:08:12 compute-0 systemd[1]: libpod-conmon-a19021a6cf00d0585e094f01d03d3a1296c36aa58f074f5305888bb0f9953786.scope: Deactivated successfully.
Jan 31 08:08:12 compute-0 ceph-mon[75227]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:12 compute-0 podman[94477]: 2026-01-31 08:08:12.823958527 +0000 UTC m=+0.113070927 container create a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:08:12 compute-0 podman[94477]: 2026-01-31 08:08:12.746194509 +0000 UTC m=+0.035306979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:12 compute-0 sudo[94595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxqnqkamtksbilidfroaifcnbxksfmec ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.658375-36952-30002951411632/async_wrapper.py j723234989313 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.658375-36952-30002951411632/AnsiballZ_command.py _'
Jan 31 08:08:12 compute-0 sudo[94595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:12 compute-0 systemd[1]: Started libpod-conmon-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope.
Jan 31 08:08:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:13 compute-0 ansible-async_wrapper.py[94597]: Invoked with j723234989313 30 /home/zuul/.ansible/tmp/ansible-tmp-1769846892.658375-36952-30002951411632/AnsiballZ_command.py _
Jan 31 08:08:13 compute-0 ansible-async_wrapper.py[94605]: Starting module and watcher
Jan 31 08:08:13 compute-0 ansible-async_wrapper.py[94605]: Start watching 94606 (30)
Jan 31 08:08:13 compute-0 ansible-async_wrapper.py[94606]: Start module (94606)
Jan 31 08:08:13 compute-0 ansible-async_wrapper.py[94597]: Return async_wrapper task started.
Jan 31 08:08:13 compute-0 sudo[94595]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:13 compute-0 podman[94477]: 2026-01-31 08:08:13.17635095 +0000 UTC m=+0.465463370 container init a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:13 compute-0 podman[94477]: 2026-01-31 08:08:13.183522174 +0000 UTC m=+0.472634574 container start a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:13 compute-0 python3[94607]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:13 compute-0 podman[94477]: 2026-01-31 08:08:13.267401777 +0000 UTC m=+0.556514177 container attach a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:08:13 compute-0 podman[94610]: 2026-01-31 08:08:13.274906451 +0000 UTC m=+0.022971916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:13 compute-0 podman[94610]: 2026-01-31 08:08:13.463628055 +0000 UTC m=+0.211693500 container create c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:08:13 compute-0 systemd[1]: Started libpod-conmon-c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef.scope.
Jan 31 08:08:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a5438e68eec4c6e5feab48333881adbbfd63d6a762a69a09ce315cbc48741/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a5438e68eec4c6e5feab48333881adbbfd63d6a762a69a09ce315cbc48741/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:13 compute-0 lvm[94702]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:08:13 compute-0 lvm[94702]: VG ceph_vg0 finished
Jan 31 08:08:13 compute-0 lvm[94703]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:08:13 compute-0 lvm[94703]: VG ceph_vg1 finished
Jan 31 08:08:13 compute-0 lvm[94705]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:08:13 compute-0 lvm[94705]: VG ceph_vg2 finished
Jan 31 08:08:13 compute-0 podman[94610]: 2026-01-31 08:08:13.893747506 +0000 UTC m=+0.641812981 container init c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:13 compute-0 podman[94610]: 2026-01-31 08:08:13.902282709 +0000 UTC m=+0.650348174 container start c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:08:13 compute-0 boring_sammet[94600]: {}
Jan 31 08:08:13 compute-0 podman[94610]: 2026-01-31 08:08:13.944906735 +0000 UTC m=+0.692972210 container attach c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:08:13 compute-0 systemd[1]: libpod-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope: Deactivated successfully.
Jan 31 08:08:13 compute-0 systemd[1]: libpod-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope: Consumed 1.011s CPU time.
Jan 31 08:08:13 compute-0 podman[94477]: 2026-01-31 08:08:13.952493241 +0000 UTC m=+1.241605651 container died a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:08:14 compute-0 sudo[94785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgahjebazonmiaschoukqnsucdnzaqog ; /usr/bin/python3'
Jan 31 08:08:14 compute-0 sudo[94785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:14 compute-0 ceph-mon[75227]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-059e5e22b9d4f36016c69919452e4fe99eff66a3bc9029b469f90ce5f54a9721-merged.mount: Deactivated successfully.
Jan 31 08:08:14 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:14 compute-0 elastic_chatterjee[94691]: 
Jan 31 08:08:14 compute-0 elastic_chatterjee[94691]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 08:08:14 compute-0 systemd[1]: libpod-c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef.scope: Deactivated successfully.
Jan 31 08:08:14 compute-0 podman[94610]: 2026-01-31 08:08:14.439301149 +0000 UTC m=+1.187366764 container died c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:08:14 compute-0 python3[94787]: ansible-ansible.legacy.async_status Invoked with jid=j723234989313.94597 mode=status _async_dir=/root/.ansible_async
Jan 31 08:08:14 compute-0 sudo[94785]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-309a5438e68eec4c6e5feab48333881adbbfd63d6a762a69a09ce315cbc48741-merged.mount: Deactivated successfully.
Jan 31 08:08:14 compute-0 podman[94610]: 2026-01-31 08:08:14.712872522 +0000 UTC m=+1.460938007 container remove c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef (image=quay.io/ceph/ceph:v20, name=elastic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:14 compute-0 ansible-async_wrapper.py[94606]: Module complete (94606)
Jan 31 08:08:14 compute-0 podman[94477]: 2026-01-31 08:08:14.886686681 +0000 UTC m=+2.175799071 container remove a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sammet, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:08:14 compute-0 systemd[1]: libpod-conmon-a3ad9ac51f2abf102d18e0af35ed92c97d32b5f2400f8808c960f3da0b6cc301.scope: Deactivated successfully.
Jan 31 08:08:14 compute-0 systemd[1]: libpod-conmon-c2d23765b27fe572094dbc8382d5b3a80fa4b8d5fbcabd450782d73d57ea89ef.scope: Deactivated successfully.
Jan 31 08:08:14 compute-0 sudo[94340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:15 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev ff073104-e0de-4320-94d4-974f014c7b3e (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 08:08:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 31 08:08:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 08:08:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 08:08:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 31 08:08:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:15 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dnvgmk on compute-0
Jan 31 08:08:15 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dnvgmk on compute-0
Jan 31 08:08:15 compute-0 sudo[94802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:15 compute-0 sudo[94802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:15 compute-0 sudo[94802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:15 compute-0 sudo[94827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:08:15 compute-0 sudo[94827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:15 compute-0 sudo[94898]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiysjlewghmmfztswzgkcyceclwxaprx ; /usr/bin/python3'
Jan 31 08:08:15 compute-0 sudo[94898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:15 compute-0 python3[94900]: ansible-ansible.legacy.async_status Invoked with jid=j723234989313.94597 mode=status _async_dir=/root/.ansible_async
Jan 31 08:08:15 compute-0 sudo[94898]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:15 compute-0 podman[94941]: 2026-01-31 08:08:15.811855524 +0000 UTC m=+0.022518374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:15 compute-0 sudo[95001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptmbgeihcyfmfexybdvsgjeqruxhckxk ; /usr/bin/python3'
Jan 31 08:08:15 compute-0 sudo[95001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:15 compute-0 podman[94941]: 2026-01-31 08:08:15.980856805 +0000 UTC m=+0.191519655 container create 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dnvgmk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:16 compute-0 python3[95003]: ansible-ansible.legacy.async_status Invoked with jid=j723234989313.94597 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 08:08:16 compute-0 sudo[95001]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:16 compute-0 systemd[1]: Started libpod-conmon-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope.
Jan 31 08:08:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:16 compute-0 podman[94941]: 2026-01-31 08:08:16.398839969 +0000 UTC m=+0.609502899 container init 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:08:16 compute-0 podman[94941]: 2026-01-31 08:08:16.407620179 +0000 UTC m=+0.618283029 container start 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:08:16 compute-0 distracted_dijkstra[95006]: 167 167
Jan 31 08:08:16 compute-0 systemd[1]: libpod-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope: Deactivated successfully.
Jan 31 08:08:16 compute-0 conmon[95006]: conmon 60790b49ac87ccbf2aca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope/container/memory.events
Jan 31 08:08:16 compute-0 podman[94941]: 2026-01-31 08:08:16.667124332 +0000 UTC m=+0.877787182 container attach 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 08:08:16 compute-0 podman[94941]: 2026-01-31 08:08:16.667646167 +0000 UTC m=+0.878309007 container died 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:08:16 compute-0 sudo[95046]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtdmlbqxihevrmbaymwoctyjqbdhezlu ; /usr/bin/python3'
Jan 31 08:08:16 compute-0 sudo[95046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9413820dcad20d90559af2c8295e93a343c318d11ad51ae6aa810383d974d2f7-merged.mount: Deactivated successfully.
Jan 31 08:08:16 compute-0 python3[95048]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:17 compute-0 podman[94941]: 2026-01-31 08:08:17.134438444 +0000 UTC m=+1.345101294 container remove 60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:08:17 compute-0 systemd[1]: libpod-conmon-60790b49ac87ccbf2acaaf1d5965fad096fe68ea55067a38b1aec90068ff1d68.scope: Deactivated successfully.
Jan 31 08:08:17 compute-0 podman[95051]: 2026-01-31 08:08:17.166431467 +0000 UTC m=+0.213931835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:17 compute-0 ceph-mon[75227]: Deploying daemon rgw.rgw.compute-0.dnvgmk on compute-0
Jan 31 08:08:17 compute-0 ceph-mon[75227]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:17 compute-0 podman[95051]: 2026-01-31 08:08:17.443916563 +0000 UTC m=+0.491416901 container create a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:08:17 compute-0 systemd[1]: Started libpod-conmon-a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc.scope.
Jan 31 08:08:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310357367d637cdb230b236e91349cf2a34251ece809c20d727638e1f9441d29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/310357367d637cdb230b236e91349cf2a34251ece809c20d727638e1f9441d29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:17 compute-0 podman[95051]: 2026-01-31 08:08:17.567450437 +0000 UTC m=+0.614950845 container init a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:08:17 compute-0 podman[95051]: 2026-01-31 08:08:17.575934189 +0000 UTC m=+0.623434557 container start a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:17 compute-0 podman[95051]: 2026-01-31 08:08:17.600699925 +0000 UTC m=+0.648200263 container attach a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:08:17 compute-0 systemd[1]: Reloading.
Jan 31 08:08:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:17 compute-0 systemd-sysv-generator[95107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:17 compute-0 systemd-rc-local-generator[95101]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:17 compute-0 systemd[1]: Reloading.
Jan 31 08:08:17 compute-0 systemd-rc-local-generator[95155]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:17 compute-0 systemd-sysv-generator[95161]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:18 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:18 compute-0 admiring_keller[95067]: 
Jan 31 08:08:18 compute-0 admiring_keller[95067]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 08:08:18 compute-0 podman[95051]: 2026-01-31 08:08:18.070553889 +0000 UTC m=+1.118054217 container died a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:18 compute-0 ansible-async_wrapper.py[94605]: Done in kid B.
Jan 31 08:08:18 compute-0 systemd[1]: libpod-a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc.scope: Deactivated successfully.
Jan 31 08:08:18 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.dnvgmk for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:08:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-310357367d637cdb230b236e91349cf2a34251ece809c20d727638e1f9441d29-merged.mount: Deactivated successfully.
Jan 31 08:08:18 compute-0 ceph-mon[75227]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:18 compute-0 ceph-mon[75227]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:18 compute-0 podman[95051]: 2026-01-31 08:08:18.763021182 +0000 UTC m=+1.810521530 container remove a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc (image=quay.io/ceph/ceph:v20, name=admiring_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:18 compute-0 systemd[1]: libpod-conmon-a2cbecfe56cc75f72a3b70a3a606c5eee1eae771632e456aa0dcf375da8f31bc.scope: Deactivated successfully.
Jan 31 08:08:18 compute-0 sudo[95046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:18 compute-0 podman[95230]: 2026-01-31 08:08:18.871532897 +0000 UTC m=+0.019718703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:18 compute-0 podman[95230]: 2026-01-31 08:08:18.992626552 +0000 UTC m=+0.140812358 container create d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-rgw-rgw-compute-0-dnvgmk, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886692810a620050d6b802adb25829b935e6950c2fdba75f0e822aa66465ed1e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dnvgmk supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:19 compute-0 podman[95230]: 2026-01-31 08:08:19.175501079 +0000 UTC m=+0.323686965 container init d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-rgw-rgw-compute-0-dnvgmk, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:08:19 compute-0 podman[95230]: 2026-01-31 08:08:19.184715742 +0000 UTC m=+0.332901558 container start d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-rgw-rgw-compute-0-dnvgmk, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:19 compute-0 radosgw[95251]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:08:19 compute-0 radosgw[95251]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 31 08:08:19 compute-0 radosgw[95251]: framework: beast
Jan 31 08:08:19 compute-0 radosgw[95251]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 08:08:19 compute-0 radosgw[95251]: init_numa not setting numa affinity
Jan 31 08:08:19 compute-0 bash[95230]: d9d79808a6c74b9f19dc87bff2ff2656fd4a319161e18dcbe010432cfd7060bb
Jan 31 08:08:19 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.dnvgmk for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:08:19 compute-0 sudo[94827]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:19 compute-0 sudo[95303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fktolhisygovjuaomivqpfzbonzzypgm ; /usr/bin/python3'
Jan 31 08:08:19 compute-0 sudo[95303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 08:08:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:19 compute-0 python3[95305]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 08:08:19 compute-0 podman[95306]: 2026-01-31 08:08:19.771563044 +0000 UTC m=+0.043711428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 31 08:08:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 08:08:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:19 compute-0 podman[95306]: 2026-01-31 08:08:19.960128703 +0000 UTC m=+0.232277027 container create 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev ff073104-e0de-4320-94d4-974f014c7b3e (Updating rgw.rgw deployment (+1 -> 1))
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event ff073104-e0de-4320-94d4-974f014c7b3e (Updating rgw.rgw deployment (+1 -> 1)) in 5 seconds
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 31 08:08:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 08:08:20 compute-0 systemd[1]: Started libpod-conmon-657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007.scope.
Jan 31 08:08:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936664a9fe5fa3dd7085565398026e9da6ad2fd574184ce4f7b90174ba9017d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936664a9fe5fa3dd7085565398026e9da6ad2fd574184ce4f7b90174ba9017d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:20 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 31 08:08:20 compute-0 podman[95306]: 2026-01-31 08:08:20.386720183 +0000 UTC m=+0.658868537 container init 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:08:20 compute-0 podman[95306]: 2026-01-31 08:08:20.395544865 +0000 UTC m=+0.667693179 container start 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:08:20 compute-0 podman[95306]: 2026-01-31 08:08:20.428902955 +0000 UTC m=+0.701051279 container attach 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 08:08:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 08:08:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.nafbok on compute-0
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.nafbok on compute-0
Jan 31 08:08:20 compute-0 sudo[95327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:20 compute-0 sudo[95327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:20 compute-0 sudo[95327]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:20 compute-0 sudo[95369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 82c880e6-d992-5408-8b12-efff9c275473
Jan 31 08:08:20 compute-0 sudo[95369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 08:08:20 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 08:08:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:08:20 compute-0 eloquent_golick[95321]: 
Jan 31 08:08:20 compute-0 eloquent_golick[95321]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 31 08:08:20 compute-0 systemd[1]: libpod-657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007.scope: Deactivated successfully.
Jan 31 08:08:20 compute-0 podman[95306]: 2026-01-31 08:08:20.864679967 +0000 UTC m=+1.136828251 container died 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 08:08:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:21 compute-0 ceph-mon[75227]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:21 compute-0 ceph-mon[75227]: osdmap e33: 3 total, 3 up, 3 in
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:21 compute-0 ceph-mon[75227]: Saving service rgw.rgw spec with placement compute-0
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.nafbok", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 08:08:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-936664a9fe5fa3dd7085565398026e9da6ad2fd574184ce4f7b90174ba9017d5-merged.mount: Deactivated successfully.
Jan 31 08:08:21 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 08:08:21 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:21 compute-0 podman[95306]: 2026-01-31 08:08:21.581773104 +0000 UTC m=+1.853921408 container remove 657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007 (image=quay.io/ceph/ceph:v20, name=eloquent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:08:21 compute-0 sudo[95303]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:21 compute-0 systemd[1]: libpod-conmon-657980410d950b56f067c4de43c142f525bc63f5dab8a69d3689919a04bc5007.scope: Deactivated successfully.
Jan 31 08:08:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v80: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:21 compute-0 podman[95446]: 2026-01-31 08:08:21.871660803 +0000 UTC m=+0.119689304 container create 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:21 compute-0 podman[95446]: 2026-01-31 08:08:21.783391056 +0000 UTC m=+0.031419627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:21 compute-0 systemd[1]: Started libpod-conmon-1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b.scope.
Jan 31 08:08:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 08:08:22 compute-0 podman[95446]: 2026-01-31 08:08:22.14813396 +0000 UTC m=+0.396162561 container init 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:08:22 compute-0 podman[95446]: 2026-01-31 08:08:22.156138939 +0000 UTC m=+0.404167450 container start 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:08:22 compute-0 jolly_hopper[96022]: 167 167
Jan 31 08:08:22 compute-0 systemd[1]: libpod-1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b.scope: Deactivated successfully.
Jan 31 08:08:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 08:08:22 compute-0 podman[95446]: 2026-01-31 08:08:22.207469883 +0000 UTC m=+0.455498504 container attach 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:08:22 compute-0 podman[95446]: 2026-01-31 08:08:22.208618026 +0000 UTC m=+0.456646587 container died 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:08:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 08:08:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 31 08:08:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 08:08:22 compute-0 sudo[96063]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wthhbhgickbhcetvyoqidlpyjpbierlm ; /usr/bin/python3'
Jan 31 08:08:22 compute-0 sudo[96063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:22 compute-0 ceph-mon[75227]: Deploying daemon mds.cephfs.compute-0.nafbok on compute-0
Jan 31 08:08:22 compute-0 ceph-mon[75227]: from='client.14255 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:08:22 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2532394454' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 08:08:22 compute-0 ceph-mon[75227]: osdmap e34: 3 total, 3 up, 3 in
Jan 31 08:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-545a2788c1aa01335b5140d7307f95b2d61758c4b6e814f0bba2c35f7cd6a4ae-merged.mount: Deactivated successfully.
Jan 31 08:08:22 compute-0 python3[96065]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:22 compute-0 podman[95446]: 2026-01-31 08:08:22.688168096 +0000 UTC m=+0.936196637 container remove 1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:08:22 compute-0 systemd[1]: libpod-conmon-1ff66d8476fd337447f7a84a34f409eebde858c4b02cc2646d437d219770224b.scope: Deactivated successfully.
Jan 31 08:08:22 compute-0 podman[96069]: 2026-01-31 08:08:22.629226395 +0000 UTC m=+0.082054462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:22 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 4 completed events
Jan 31 08:08:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:08:22 compute-0 podman[96069]: 2026-01-31 08:08:22.812410271 +0000 UTC m=+0.265238328 container create a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:08:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:22 compute-0 ceph-mgr[75519]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 31 08:08:22 compute-0 systemd[1]: Started libpod-conmon-a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf.scope.
Jan 31 08:08:22 compute-0 systemd[1]: Reloading.
Jan 31 08:08:23 compute-0 systemd-rc-local-generator[96114]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:23 compute-0 systemd-sysv-generator[96117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f82d8bcf9d79c41461fc9c18de9cb15077da52165414439ac88d3e7acd4be7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f82d8bcf9d79c41461fc9c18de9cb15077da52165414439ac88d3e7acd4be7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:23 compute-0 podman[96069]: 2026-01-31 08:08:23.216808017 +0000 UTC m=+0.669636104 container init a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:08:23 compute-0 podman[96069]: 2026-01-31 08:08:23.225905317 +0000 UTC m=+0.678733374 container start a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:08:23 compute-0 podman[96069]: 2026-01-31 08:08:23.229271043 +0000 UTC m=+0.682099140 container attach a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:08:23 compute-0 systemd[1]: Reloading.
Jan 31 08:08:23 compute-0 systemd-rc-local-generator[96177]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:08:23 compute-0 systemd-sysv-generator[96183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:08:23 compute-0 ceph-mon[75227]: pgmap v80: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:23 compute-0 ceph-mon[75227]: osdmap e35: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 31 08:08:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:23 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 08:08:23 compute-0 ceph-mon[75227]: osdmap e36: 3 total, 3 up, 3 in
Jan 31 08:08:23 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.nafbok for 82c880e6-d992-5408-8b12-efff9c275473...
Jan 31 08:08:23 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:23 compute-0 laughing_hofstadter[96087]: 
Jan 31 08:08:23 compute-0 laughing_hofstadter[96087]: [{"container_id": "a94e6142bb25", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.25%", "created": "2026-01-31T08:06:53.646207Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T08:06:53.716669Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.361962Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-31T08:06:53.532959Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@crash.compute-0", "version": "20.2.0"}, {"container_id": "469c441ebd04", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "17.40%", "created": "2026-01-31T08:06:14.835074Z", "daemon_id": "compute-0.fqetdi", "daemon_name": "mgr.compute-0.fqetdi", "daemon_type": "mgr", "events": ["2026-01-31T08:06:58.210393Z daemon:mgr.compute-0.fqetdi [INFO] \"Reconfigured mgr.compute-0.fqetdi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.361809Z", "memory_usage": 548090675, "pending_daemon_config": false, "ports": 
[9283, 8765], "service_name": "mgr", "started": "2026-01-31T08:06:14.759640Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@mgr.compute-0.fqetdi", "version": "20.2.0"}, {"container_id": "2c160fb9852a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.58%", "created": "2026-01-31T08:06:11.262972Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T08:06:57.580410Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.361630Z", "memory_request": 2147483648, "memory_usage": 39405486, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-31T08:06:13.128980Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@mon.compute-0", "version": "20.2.0"}, {"container_id": "a780c474029a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.59%", "created": "2026-01-31T08:07:15.518432Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-31T08:07:15.601787Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", 
"is_active": false, "last_refresh": "2026-01-31T08:08:07.362114Z", "memory_request": 4294967296, "memory_usage": 58961428, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T08:07:15.392122Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@osd.0", "version": "20.2.0"}, {"container_id": "679fb36577e7", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.75%", "created": "2026-01-31T08:07:20.299569Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-31T08:07:20.444272Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.362289Z", "memory_request": 4294967296, "memory_usage": 58038681, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T08:07:20.084069Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@osd.1", "version": "20.2.0"}, {"container_id": "b5c171002b43", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.92%", "created": 
"2026-01-31T08:07:26.413743Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-31T08:07:27.320070Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T08:08:07.362467Z", "memory_request": 4294967296, "memory_usage": 56423874, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-31T08:07:25.815376Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82c880e6-d992-5408-8b12-efff9c275473@osd.2", "version": "20.2.0"}, {"daemon_id": "rgw.compute-0.dnvgmk", "daemon_name": "rgw.rgw.compute-0.dnvgmk", "daemon_type": "rgw", "events": ["2026-01-31T08:08:19.959068Z daemon:rgw.rgw.compute-0.dnvgmk [INFO] \"Deployed rgw.rgw.compute-0.dnvgmk on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Jan 31 08:08:23 compute-0 systemd[1]: libpod-a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf.scope: Deactivated successfully.
Jan 31 08:08:23 compute-0 podman[96069]: 2026-01-31 08:08:23.620540805 +0000 UTC m=+1.073368862 container died a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-93f82d8bcf9d79c41461fc9c18de9cb15077da52165414439ac88d3e7acd4be7-merged.mount: Deactivated successfully.
Jan 31 08:08:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v83: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:23 compute-0 podman[96069]: 2026-01-31 08:08:23.672992581 +0000 UTC m=+1.125820628 container remove a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf (image=quay.io/ceph/ceph:v20, name=laughing_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:08:23 compute-0 systemd[1]: libpod-conmon-a2e0fcaf28f5b2b76c57a097e39ade9199a5f0dd9bf5f7c57d219d9c19b837cf.scope: Deactivated successfully.
Jan 31 08:08:23 compute-0 sudo[96063]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:23 compute-0 podman[96247]: 2026-01-31 08:08:23.72832949 +0000 UTC m=+0.042111073 container create 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8673d8678c34b5392d7cfe4d22e5df8b652eb1a761a3eddf17932573e6d20351/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.nafbok supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:23 compute-0 podman[96247]: 2026-01-31 08:08:23.709014128 +0000 UTC m=+0.022795731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:23 compute-0 podman[96247]: 2026-01-31 08:08:23.812966684 +0000 UTC m=+0.126748307 container init 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:08:23 compute-0 podman[96247]: 2026-01-31 08:08:23.819229823 +0000 UTC m=+0.133011426 container start 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:08:23 compute-0 bash[96247]: 643f1bf5c6c53a7c59bea6de231dee57d5217960264ec41aeb2e846f5ce56bc5
Jan 31 08:08:23 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.nafbok for 82c880e6-d992-5408-8b12-efff9c275473.
Jan 31 08:08:23 compute-0 ceph-mds[96266]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:08:23 compute-0 ceph-mds[96266]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 31 08:08:23 compute-0 ceph-mds[96266]: main not setting numa affinity
Jan 31 08:08:23 compute-0 ceph-mds[96266]: pidfile_write: ignore empty --pid-file
Jan 31 08:08:23 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok[96262]: starting mds.cephfs.compute-0.nafbok at 
Jan 31 08:08:23 compute-0 sudo[95369]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:23 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 2 from mon.0
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:23 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c (Updating mds.cephfs deployment (+1 -> 1))
Jan 31 08:08:23 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 31 08:08:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:23 compute-0 sudo[96285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:08:23 compute-0 sudo[96285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:24 compute-0 sudo[96285]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:24 compute-0 sudo[96310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:24 compute-0 sudo[96310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:24 compute-0 sudo[96310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:24 compute-0 sudo[96335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:08:24 compute-0 sudo[96335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 08:08:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 new map
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-31T08:08:24:405775+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T08:07:59.022433+0000
                                           modified        2026-01-31T08:07:59.022433+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.nafbok{-1:14262} state up:standby seq 1 addr [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] compat {c=[1],r=[1],i=[1fff]}]
Jan 31 08:08:24 compute-0 sudo[96423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynzejomsnqgeueiumenrtlhsgpjqpbnp ; /usr/bin/python3'
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 3 from mon.0
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Monitors have assigned me to become a standby
Jan 31 08:08:24 compute-0 sudo[96423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] up:boot
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] as mds.0
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.nafbok assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.nafbok"} v 0)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.nafbok"} : dispatch
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e4 new map
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-31T08:08:24:412009+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T08:07:59.022433+0000
                                           modified        2026-01-31T08:08:24.412002+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14262}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.nafbok{0:14262} state up:creating seq 1 addr [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 4 from mon.0
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x1
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x100
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x600
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x601
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x602
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x603
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x604
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x605
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x606
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x607
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.nafbok=up:creating}
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x608
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.cache creating system inode with ino:0x609
Jan 31 08:08:24 compute-0 ceph-mds[96266]: mds.0.4 creating_done
Jan 31 08:08:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.nafbok is now active in filesystem cephfs as rank 0
Jan 31 08:08:24 compute-0 podman[96427]: 2026-01-31 08:08:24.476113752 +0000 UTC m=+0.057031778 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:24 compute-0 python3[96428]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:24 compute-0 podman[96457]: 2026-01-31 08:08:24.59452751 +0000 UTC m=+0.044884281 container create f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:08:24 compute-0 podman[96463]: 2026-01-31 08:08:24.61240774 +0000 UTC m=+0.049816612 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:08:24 compute-0 podman[96427]: 2026-01-31 08:08:24.625949207 +0000 UTC m=+0.206867233 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:24 compute-0 systemd[1]: Started libpod-conmon-f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca.scope.
Jan 31 08:08:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9714d8c02f5f3481cbbc3bcee81b929f45fab98e9fc954b32dd410113373b7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:24 compute-0 podman[96457]: 2026-01-31 08:08:24.575429155 +0000 UTC m=+0.025785916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9714d8c02f5f3481cbbc3bcee81b929f45fab98e9fc954b32dd410113373b7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:24 compute-0 podman[96457]: 2026-01-31 08:08:24.692928247 +0000 UTC m=+0.143285008 container init f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:08:24 compute-0 podman[96457]: 2026-01-31 08:08:24.69861616 +0000 UTC m=+0.148972891 container start f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:24 compute-0 podman[96457]: 2026-01-31 08:08:24.715666216 +0000 UTC m=+0.166022977 container attach f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 08:08:24 compute-0 ceph-mon[75227]: pgmap v83: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:24 compute-0 ceph-mon[75227]: osdmap e37: 3 total, 3 up, 3 in
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 31 08:08:24 compute-0 ceph-mon[75227]: mds.? [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] up:boot
Jan 31 08:08:24 compute-0 ceph-mon[75227]: daemon mds.cephfs.compute-0.nafbok assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 08:08:24 compute-0 ceph-mon[75227]: Cluster is now healthy
Jan 31 08:08:24 compute-0 ceph-mon[75227]: fsmap cephfs:0 1 up:standby
Jan 31 08:08:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.nafbok"} : dispatch
Jan 31 08:08:24 compute-0 ceph-mon[75227]: fsmap cephfs:1 {0=cephfs.compute-0.nafbok=up:creating}
Jan 31 08:08:24 compute-0 ceph-mon[75227]: daemon mds.cephfs.compute-0.nafbok is now active in filesystem cephfs as rank 0
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519348361' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:08:25 compute-0 zen_mclaren[96486]: 
Jan 31 08:08:25 compute-0 zen_mclaren[96486]: {"fsid":"82c880e6-d992-5408-8b12-efff9c275473","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":131,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1769846853,"num_in_osds":3,"osd_in_since":1769846828,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"creating+peering","count":1},{"state_name":"unknown","count":1}],"num_pgs":9,"num_pools":9,"num_objects":2,"data_bytes":459280,"bytes_used":83894272,"bytes_avail":64328032256,"bytes_total":64411926528,"unknown_pgs_ratio":0.1111111119389534,"inactive_pgs_ratio":0.1111111119389534},"fsmap":{"epoch":4,"btime":"2026-01-31T08:08:24:412009+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.nafbok","status":"up:creating","gid":14262}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T08:07:33.658076+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"9f67468f-9ddb-4e2d-9c5c-7fd766e72e3c":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"d5a72d36-5e9a-4289-8d07-2ee4a9e0f4d5":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 31 08:08:25 compute-0 systemd[1]: libpod-f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca.scope: Deactivated successfully.
Jan 31 08:08:25 compute-0 podman[96457]: 2026-01-31 08:08:25.227887019 +0000 UTC m=+0.678243750 container died f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:08:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9714d8c02f5f3481cbbc3bcee81b929f45fab98e9fc954b32dd410113373b7d-merged.mount: Deactivated successfully.
Jan 31 08:08:25 compute-0 podman[96457]: 2026-01-31 08:08:25.321318004 +0000 UTC m=+0.771674745 container remove f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca (image=quay.io/ceph/ceph:v20, name=zen_mclaren, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:25 compute-0 systemd[1]: libpod-conmon-f759627f4e009716630c3cdbfeda09773dd23c041df3efebc32da27d1b4742ca.scope: Deactivated successfully.
Jan 31 08:08:25 compute-0 sudo[96423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:25 compute-0 sudo[96335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e5 new map
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-31T08:08:25:416196+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-31T08:07:59.022433+0000
                                           modified        2026-01-31T08:08:25.416194+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14262}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14262 members: 14262
                                           [mds.cephfs.compute-0.nafbok{0:14262} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 31 08:08:25 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok Updating MDS map to version 5 from mon.0
Jan 31 08:08:25 compute-0 ceph-mds[96266]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 31 08:08:25 compute-0 ceph-mds[96266]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 31 08:08:25 compute-0 ceph-mds[96266]: mds.0.4 recovery_done -- successful recovery!
Jan 31 08:08:25 compute-0 ceph-mds[96266]: mds.0.4 active_start
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] up:active
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.nafbok=up:active}
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:25 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:25 compute-0 sudo[96685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:25 compute-0 sudo[96685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:25 compute-0 sudo[96685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:25 compute-0 sudo[96710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:08:25 compute-0 sudo[96710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v86: 10 pgs: 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.778210087 +0000 UTC m=+0.038301684 container create 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:25 compute-0 systemd[1]: Started libpod-conmon-5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d.scope.
Jan 31 08:08:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.858894979 +0000 UTC m=+0.118986606 container init 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.763716594 +0000 UTC m=+0.023808211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.866188857 +0000 UTC m=+0.126280454 container start 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.869671696 +0000 UTC m=+0.129763333 container attach 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:08:25 compute-0 gifted_shaw[96763]: 167 167
Jan 31 08:08:25 compute-0 systemd[1]: libpod-5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d.scope: Deactivated successfully.
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.871983032 +0000 UTC m=+0.132074619 container died 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fc817526de99fb614cd67e5da6ec87cf853d2716653e718f4b463836697c92a-merged.mount: Deactivated successfully.
Jan 31 08:08:25 compute-0 podman[96747]: 2026-01-31 08:08:25.911570541 +0000 UTC m=+0.171662128 container remove 5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:08:25 compute-0 systemd[1]: libpod-conmon-5e9cbcbe0380712b10b222c139bab9fe5b24090904735ad6e889c32037357f8d.scope: Deactivated successfully.
Jan 31 08:08:26 compute-0 sudo[96814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjjuuyhexdefigxwoxlyutojltkmuoms ; /usr/bin/python3'
Jan 31 08:08:26 compute-0 sudo[96814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.079232194 +0000 UTC m=+0.053892488 container create a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:08:26 compute-0 systemd[1]: Started libpod-conmon-a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac.scope.
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.053378347 +0000 UTC m=+0.028038631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.177811607 +0000 UTC m=+0.152471931 container init a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.186021921 +0000 UTC m=+0.160682185 container start a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.189532811 +0000 UTC m=+0.164193115 container attach a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:08:26 compute-0 python3[96823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 08:08:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 08:08:26 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 08:08:26 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 31 08:08:26 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 08:08:26 compute-0 ceph-mon[75227]: osdmap e38: 3 total, 3 up, 3 in
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1519348361' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: mds.? [v2:192.168.122.100:6814/2430012042,v1:192.168.122.100:6815/2430012042] up:active
Jan 31 08:08:26 compute-0 ceph-mon[75227]: fsmap cephfs:1 {0=cephfs.compute-0.nafbok=up:active}
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:26 compute-0 ceph-mon[75227]: pgmap v86: 10 pgs: 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.242863182 +0000 UTC m=+0.037357786 container create 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:08:26 compute-0 systemd[1]: Started libpod-conmon-197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32.scope.
Jan 31 08:08:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94fa871abc0b8a9029ad7035fc6909492f0eb407dde1828c984326f8c51a435/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94fa871abc0b8a9029ad7035fc6909492f0eb407dde1828c984326f8c51a435/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.227961577 +0000 UTC m=+0.022456151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.325276133 +0000 UTC m=+0.119770727 container init 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.330835772 +0000 UTC m=+0.125330346 container start 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.334997421 +0000 UTC m=+0.129492005 container attach 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:08:26 compute-0 blissful_keller[96828]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:08:26 compute-0 blissful_keller[96828]: --> All data devices are unavailable
Jan 31 08:08:26 compute-0 systemd[1]: libpod-a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac.scope: Deactivated successfully.
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.603394257 +0000 UTC m=+0.578054631 container died a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdc2dcd2770c152c8cd2efc3f44d5a20ab85c28dd6aa19c94dc6e44682a745d1-merged.mount: Deactivated successfully.
Jan 31 08:08:26 compute-0 podman[96798]: 2026-01-31 08:08:26.64484158 +0000 UTC m=+0.619501844 container remove a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_keller, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:08:26 compute-0 systemd[1]: libpod-conmon-a9d2c0b9b043af0ab015019f5a59abe29631de04e2a94ce205c85de6b43cabac.scope: Deactivated successfully.
Jan 31 08:08:26 compute-0 sudo[96710]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:26 compute-0 sudo[96898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:26 compute-0 sudo[96898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:26 compute-0 sudo[96898]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:26 compute-0 sudo[96923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:08:26 compute-0 sudo[96923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 31 08:08:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875625571' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:08:26 compute-0 recursing_ganguly[96849]: 
Jan 31 08:08:26 compute-0 systemd[1]: libpod-197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32.scope: Deactivated successfully.
Jan 31 08:08:26 compute-0 recursing_ganguly[96849]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dnvgmk","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.796110835 +0000 UTC m=+0.590605399 container died 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a94fa871abc0b8a9029ad7035fc6909492f0eb407dde1828c984326f8c51a435-merged.mount: Deactivated successfully.
Jan 31 08:08:26 compute-0 podman[96833]: 2026-01-31 08:08:26.835512449 +0000 UTC m=+0.630007013 container remove 197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32 (image=quay.io/ceph/ceph:v20, name=recursing_ganguly, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:26 compute-0 systemd[1]: libpod-conmon-197f9aebb46911e381b74d46b2c87a099cf2b9889e3bfc9fa108ef9b91dced32.scope: Deactivated successfully.
Jan 31 08:08:26 compute-0 sudo[96814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.027864486 +0000 UTC m=+0.054567187 container create 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:08:27 compute-0 systemd[1]: Started libpod-conmon-7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2.scope.
Jan 31 08:08:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.001427022 +0000 UTC m=+0.028129733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.103237726 +0000 UTC m=+0.129940407 container init 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.110623317 +0000 UTC m=+0.137325998 container start 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:08:27 compute-0 nervous_leavitt[96990]: 167 167
Jan 31 08:08:27 compute-0 systemd[1]: libpod-7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2.scope: Deactivated successfully.
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.12088798 +0000 UTC m=+0.147590691 container attach 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.121633201 +0000 UTC m=+0.148335882 container died 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-deebfde82260f6ce7332c51ce1af5950230ecddfda4be8c337aa7a6de68a55be-merged.mount: Deactivated successfully.
Jan 31 08:08:27 compute-0 podman[96975]: 2026-01-31 08:08:27.197734632 +0000 UTC m=+0.224437343 container remove 7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:08:27 compute-0 systemd[1]: libpod-conmon-7e312e0ebfec72db44268aac8debe3d10939628c77775a8760e8a99a13f9a3c2.scope: Deactivated successfully.
Jan 31 08:08:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 08:08:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 08:08:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 08:08:27 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 08:08:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 31 08:08:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 08:08:27 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:27 compute-0 ceph-mon[75227]: osdmap e39: 3 total, 3 up, 3 in
Jan 31 08:08:27 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 31 08:08:27 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2875625571' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 31 08:08:27 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 08:08:27 compute-0 ceph-mon[75227]: osdmap e40: 3 total, 3 up, 3 in
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.363662896 +0000 UTC m=+0.050540563 container create e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:27 compute-0 systemd[1]: Started libpod-conmon-e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df.scope.
Jan 31 08:08:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.336965494 +0000 UTC m=+0.023843151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.466313014 +0000 UTC m=+0.153190661 container init e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.476206936 +0000 UTC m=+0.163084603 container start e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.488534858 +0000 UTC m=+0.175412485 container attach e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:27 compute-0 sudo[97060]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmlrdoqtgsbiyhfokpebifnzyoxevvrd ; /usr/bin/python3'
Jan 31 08:08:27 compute-0 sudo[97060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 1 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Jan 31 08:08:27 compute-0 python3[97062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:27 compute-0 hungry_germain[97032]: {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:     "0": [
Jan 31 08:08:27 compute-0 hungry_germain[97032]:         {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "devices": [
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "/dev/loop3"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             ],
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_name": "ceph_lv0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_size": "21470642176",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "name": "ceph_lv0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "tags": {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.crush_device_class": "",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.encrypted": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osd_id": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.type": "block",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.vdo": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.with_tpm": "0"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             },
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "type": "block",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "vg_name": "ceph_vg0"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:         }
Jan 31 08:08:27 compute-0 hungry_germain[97032]:     ],
Jan 31 08:08:27 compute-0 hungry_germain[97032]:     "1": [
Jan 31 08:08:27 compute-0 hungry_germain[97032]:         {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "devices": [
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "/dev/loop4"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             ],
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_name": "ceph_lv1",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_size": "21470642176",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "name": "ceph_lv1",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "tags": {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.crush_device_class": "",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.encrypted": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osd_id": "1",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.type": "block",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.vdo": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.with_tpm": "0"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             },
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "type": "block",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "vg_name": "ceph_vg1"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:         }
Jan 31 08:08:27 compute-0 hungry_germain[97032]:     ],
Jan 31 08:08:27 compute-0 hungry_germain[97032]:     "2": [
Jan 31 08:08:27 compute-0 hungry_germain[97032]:         {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "devices": [
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "/dev/loop5"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             ],
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_name": "ceph_lv2",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_size": "21470642176",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "name": "ceph_lv2",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "tags": {
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.crush_device_class": "",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.encrypted": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osd_id": "2",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.type": "block",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.vdo": "0",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:                 "ceph.with_tpm": "0"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             },
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "type": "block",
Jan 31 08:08:27 compute-0 hungry_germain[97032]:             "vg_name": "ceph_vg2"
Jan 31 08:08:27 compute-0 hungry_germain[97032]:         }
Jan 31 08:08:27 compute-0 hungry_germain[97032]:     ]
Jan 31 08:08:27 compute-0 hungry_germain[97032]: }
Jan 31 08:08:27 compute-0 systemd[1]: libpod-e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df.scope: Deactivated successfully.
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.761508785 +0000 UTC m=+0.448386422 container died e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:08:27 compute-0 podman[97067]: 2026-01-31 08:08:27.785740706 +0000 UTC m=+0.063209115 container create d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-69e62b495c20e5a97f9bcf132ef6018a268afe7babcea2853e7ef95010817634-merged.mount: Deactivated successfully.
Jan 31 08:08:27 compute-0 systemd[1]: Started libpod-conmon-d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4.scope.
Jan 31 08:08:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bd4bd7835ac7e7ac6557e1d39918898d4759ff20570b9e61cd3e5034d6d30a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3bd4bd7835ac7e7ac6557e1d39918898d4759ff20570b9e61cd3e5034d6d30a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:27 compute-0 podman[97067]: 2026-01-31 08:08:27.755298738 +0000 UTC m=+0.032767176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:27 compute-0 podman[97067]: 2026-01-31 08:08:27.856789553 +0000 UTC m=+0.134257991 container init d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:27 compute-0 podman[97067]: 2026-01-31 08:08:27.860231281 +0000 UTC m=+0.137699689 container start d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:08:27 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 5 completed events
Jan 31 08:08:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:08:27 compute-0 podman[97016]: 2026-01-31 08:08:27.911681819 +0000 UTC m=+0.598559456 container remove e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:27 compute-0 systemd[1]: libpod-conmon-e1eb5e72fc3422dc96dead0eeb957cfef4360c7304eba2ebb7f8895d8aa2b2df.scope: Deactivated successfully.
Jan 31 08:08:27 compute-0 sudo[96923]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:28 compute-0 sudo[97101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:28 compute-0 sudo[97101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:28 compute-0 sudo[97101]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:28 compute-0 sudo[97144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:08:28 compute-0 sudo[97144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:28 compute-0 podman[97067]: 2026-01-31 08:08:28.048356928 +0000 UTC m=+0.325825336 container attach d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:08:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 08:08:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 08:08:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 08:08:28 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 08:08:28 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 31 08:08:28 compute-0 ceph-mon[75227]: pgmap v89: 11 pgs: 1 unknown, 1 creating+peering, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 5.7 KiB/s wr, 15 op/s
Jan 31 08:08:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:28 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3933730218' entity='client.rgw.rgw.compute-0.dnvgmk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 08:08:28 compute-0 ceph-mon[75227]: osdmap e41: 3 total, 3 up, 3 in
Jan 31 08:08:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 31 08:08:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2466039113' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 08:08:28 compute-0 sharp_mclaren[97096]: mimic
Jan 31 08:08:28 compute-0 systemd[1]: libpod-d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4.scope: Deactivated successfully.
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.301408837 +0000 UTC m=+0.046517358 container create 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:28 compute-0 podman[97067]: 2026-01-31 08:08:28.302746695 +0000 UTC m=+0.580215133 container died d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.272946005 +0000 UTC m=+0.018054536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:28 compute-0 systemd[1]: Started libpod-conmon-89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44.scope.
Jan 31 08:08:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3bd4bd7835ac7e7ac6557e1d39918898d4759ff20570b9e61cd3e5034d6d30a-merged.mount: Deactivated successfully.
Jan 31 08:08:28 compute-0 podman[97067]: 2026-01-31 08:08:28.442463481 +0000 UTC m=+0.719931899 container remove d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4 (image=quay.io/ceph/ceph:v20, name=sharp_mclaren, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:08:28 compute-0 systemd[1]: libpod-conmon-d86817efeb6b3e405e931b641e84ca39b98152a7f53881c9f896c26a2e6308c4.scope: Deactivated successfully.
Jan 31 08:08:28 compute-0 sudo[97060]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.460941378 +0000 UTC m=+0.206049899 container init 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.46555632 +0000 UTC m=+0.210664881 container start 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:08:28 compute-0 sleepy_shirley[97211]: 167 167
Jan 31 08:08:28 compute-0 systemd[1]: libpod-89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44.scope: Deactivated successfully.
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.486724624 +0000 UTC m=+0.231833145 container attach 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.488667349 +0000 UTC m=+0.233775900 container died 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-47d38caf2157904dd06049ddb11643817cc7dd039d9c769908167a9e70834e5b-merged.mount: Deactivated successfully.
Jan 31 08:08:28 compute-0 podman[97181]: 2026-01-31 08:08:28.57667535 +0000 UTC m=+0.321783871 container remove 89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_shirley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:28 compute-0 systemd[1]: libpod-conmon-89d907433536ed6974f4db66fc212e4abcb1ae2a5451a89cf0867f78f628bf44.scope: Deactivated successfully.
Jan 31 08:08:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:28 compute-0 podman[97252]: 2026-01-31 08:08:28.743403236 +0000 UTC m=+0.065816848 container create 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:08:28 compute-0 systemd[1]: Started libpod-conmon-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope.
Jan 31 08:08:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:28 compute-0 podman[97252]: 2026-01-31 08:08:28.712762772 +0000 UTC m=+0.035176424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:28 compute-0 podman[97252]: 2026-01-31 08:08:28.839330193 +0000 UTC m=+0.161743805 container init 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:28 compute-0 podman[97252]: 2026-01-31 08:08:28.844777798 +0000 UTC m=+0.167191410 container start 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:08:28 compute-0 podman[97252]: 2026-01-31 08:08:28.85710707 +0000 UTC m=+0.179520702 container attach 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:08:28 compute-0 radosgw[95251]: v1 topic migration: starting v1 topic migration..
Jan 31 08:08:28 compute-0 radosgw[95251]: v1 topic migration: finished v1 topic migration
Jan 31 08:08:28 compute-0 radosgw[95251]: framework: beast
Jan 31 08:08:28 compute-0 radosgw[95251]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 08:08:28 compute-0 radosgw[95251]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 08:08:28 compute-0 radosgw[95251]: starting handler: beast
Jan 31 08:08:29 compute-0 radosgw[95251]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 08:08:29 compute-0 radosgw[95251]: mgrc service_daemon_register rgw.14258 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dnvgmk,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864296,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=e5ce1d65-2430-4be3-9ca0-9683547c77a5,zone_name=default,zonegroup_id=2c8897d5-67a2-451c-a710-7a7bae68fa34,zonegroup_name=default}
Jan 31 08:08:29 compute-0 sudo[97329]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svihsycksirqnletmkeotuniuafedyus ; /usr/bin/python3'
Jan 31 08:08:29 compute-0 sudo[97329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2466039113' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 31 08:08:29 compute-0 python3[97334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:08:29 compute-0 podman[97374]: 2026-01-31 08:08:29.339558522 +0000 UTC m=+0.043433490 container create 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:29 compute-0 systemd[1]: Started libpod-conmon-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope.
Jan 31 08:08:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1c4366617768d4128969bc95e32e43c90044f30551ec9e4959fe39db583d7f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1c4366617768d4128969bc95e32e43c90044f30551ec9e4959fe39db583d7f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:29 compute-0 podman[97374]: 2026-01-31 08:08:29.320365595 +0000 UTC m=+0.024240583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:08:29 compute-0 podman[97374]: 2026-01-31 08:08:29.420769079 +0000 UTC m=+0.124644067 container init 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:29 compute-0 ceph-mds[96266]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 08:08:29 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mds-cephfs-compute-0-nafbok[96262]: 2026-01-31T08:08:29.421+0000 7f010199b640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 31 08:08:29 compute-0 podman[97374]: 2026-01-31 08:08:29.426155203 +0000 UTC m=+0.130030171 container start 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:08:29 compute-0 podman[97374]: 2026-01-31 08:08:29.431917467 +0000 UTC m=+0.135792465 container attach 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:29 compute-0 lvm[97408]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:08:29 compute-0 lvm[97408]: VG ceph_vg0 finished
Jan 31 08:08:29 compute-0 lvm[97411]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:08:29 compute-0 lvm[97411]: VG ceph_vg1 finished
Jan 31 08:08:29 compute-0 lvm[97413]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:08:29 compute-0 lvm[97413]: VG ceph_vg2 finished
Jan 31 08:08:29 compute-0 angry_meitner[97268]: {}
Jan 31 08:08:29 compute-0 systemd[1]: libpod-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope: Deactivated successfully.
Jan 31 08:08:29 compute-0 conmon[97268]: conmon 87798c6306dd254a0bfd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope/container/memory.events
Jan 31 08:08:29 compute-0 podman[97435]: 2026-01-31 08:08:29.621744832 +0000 UTC m=+0.023823910 container died 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d23bd1d85480aad6713182c4f5fd431050b2f87aaeb77b11f4d8b99f7495b0e3-merged.mount: Deactivated successfully.
Jan 31 08:08:29 compute-0 podman[97435]: 2026-01-31 08:08:29.669212656 +0000 UTC m=+0.071291734 container remove 87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:29 compute-0 systemd[1]: libpod-conmon-87798c6306dd254a0bfd9ad384b0ff344ec780c75193cddefd290047a1f61b8e.scope: Deactivated successfully.
Jan 31 08:08:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.0 KiB/s wr, 4 op/s
Jan 31 08:08:29 compute-0 sudo[97144]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:29 compute-0 sudo[97450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:08:29 compute-0 sudo[97450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:29 compute-0 sudo[97450]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:29 compute-0 sudo[97475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:29 compute-0 sudo[97475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:29 compute-0 sudo[97475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:29 compute-0 sudo[97500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:08:29 compute-0 sudo[97500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 31 08:08:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413874688' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 08:08:29 compute-0 zen_elion[97401]: 
Jan 31 08:08:29 compute-0 systemd[1]: libpod-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope: Deactivated successfully.
Jan 31 08:08:29 compute-0 conmon[97401]: conmon 661767da94bb0acb8f2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope/container/memory.events
Jan 31 08:08:29 compute-0 zen_elion[97401]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":6}}
Jan 31 08:08:29 compute-0 podman[97374]: 2026-01-31 08:08:29.989487853 +0000 UTC m=+0.693362921 container died 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e1c4366617768d4128969bc95e32e43c90044f30551ec9e4959fe39db583d7f-merged.mount: Deactivated successfully.
Jan 31 08:08:30 compute-0 podman[97374]: 2026-01-31 08:08:30.036145194 +0000 UTC m=+0.740020192 container remove 661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436 (image=quay.io/ceph/ceph:v20, name=zen_elion, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:08:30 compute-0 systemd[1]: libpod-conmon-661767da94bb0acb8f2cd1b170ee1248561bfc9042cc8b9f56633c91ba041436.scope: Deactivated successfully.
Jan 31 08:08:30 compute-0 sudo[97329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:30 compute-0 podman[97584]: 2026-01-31 08:08:30.270332655 +0000 UTC m=+0.052476468 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:08:30 compute-0 podman[97584]: 2026-01-31 08:08:30.38058851 +0000 UTC m=+0.162732263 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:30 compute-0 ceph-mon[75227]: pgmap v91: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 2.0 KiB/s wr, 4 op/s
Jan 31 08:08:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:30 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2413874688' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 31 08:08:31 compute-0 sudo[97500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:08:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:31 compute-0 sudo[97773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:31 compute-0 sudo[97773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:31 compute-0 sudo[97773]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:31 compute-0 sudo[97798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:08:31 compute-0 sudo[97798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.440412774 +0000 UTC m=+0.041076543 container create 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:31 compute-0 systemd[1]: Started libpod-conmon-26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e.scope.
Jan 31 08:08:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.422889154 +0000 UTC m=+0.023552953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.525370148 +0000 UTC m=+0.126033997 container init 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.533276883 +0000 UTC m=+0.133940642 container start 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.536357051 +0000 UTC m=+0.137020830 container attach 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:31 compute-0 heuristic_meitner[97852]: 167 167
Jan 31 08:08:31 compute-0 systemd[1]: libpod-26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e.scope: Deactivated successfully.
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.559007107 +0000 UTC m=+0.159670886 container died 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-603a017ce25e9aa748c2cf08af0559ab611dea935f3b20292d30f687a39008f2-merged.mount: Deactivated successfully.
Jan 31 08:08:31 compute-0 podman[97836]: 2026-01-31 08:08:31.600021527 +0000 UTC m=+0.200685296 container remove 26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:31 compute-0 systemd[1]: libpod-conmon-26e2f5b105b9f8b27c656113ea7baed88f08793f54332c6bdc989af75a16d09e.scope: Deactivated successfully.
Jan 31 08:08:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:08:31
Jan 31 08:08:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:08:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Some PGs (0.090909) are unknown; try again later
Jan 31 08:08:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 236 op/s
Jan 31 08:08:31 compute-0 podman[97876]: 2026-01-31 08:08:31.791225682 +0000 UTC m=+0.058324755 container create 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:31 compute-0 systemd[1]: Started libpod-conmon-9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda.scope.
Jan 31 08:08:31 compute-0 podman[97876]: 2026-01-31 08:08:31.766478236 +0000 UTC m=+0.033577359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:31 compute-0 podman[97876]: 2026-01-31 08:08:31.904690909 +0000 UTC m=+0.171790022 container init 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:31 compute-0 podman[97876]: 2026-01-31 08:08:31.921619112 +0000 UTC m=+0.188718175 container start 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:31 compute-0 podman[97876]: 2026-01-31 08:08:31.925880933 +0000 UTC m=+0.192980046 container attach 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:08:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:08:32 compute-0 romantic_faraday[97893]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:08:32 compute-0 romantic_faraday[97893]: --> All data devices are unavailable
Jan 31 08:08:32 compute-0 systemd[1]: libpod-9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda.scope: Deactivated successfully.
Jan 31 08:08:32 compute-0 podman[97876]: 2026-01-31 08:08:32.422874661 +0000 UTC m=+0.689973724 container died 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b52ec481e67da0af85af3285d791e6bcb5f98e471fdf02c00513917b9b169e-merged.mount: Deactivated successfully.
Jan 31 08:08:32 compute-0 podman[97876]: 2026-01-31 08:08:32.472167448 +0000 UTC m=+0.739266491 container remove 9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_faraday, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:32 compute-0 systemd[1]: libpod-conmon-9adea20d2c463df3a60e9befe4e4b6eaf7c58314b1c627d1d5157ba886ea0cda.scope: Deactivated successfully.
Jan 31 08:08:32 compute-0 sudo[97798]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:32 compute-0 sudo[97926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:32 compute-0 sudo[97926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:32 compute-0 sudo[97926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:32 compute-0 sudo[97951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:08:32 compute-0 sudo[97951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.48873506906078e-07 of space, bias 4.0, pg target 0.0006586482082872936 quantized to 16 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 08:08:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:08:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:08:32 compute-0 podman[97988]: 2026-01-31 08:08:32.939807557 +0000 UTC m=+0.040099515 container create 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:08:32 compute-0 systemd[1]: Started libpod-conmon-41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b.scope.
Jan 31 08:08:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:32 compute-0 podman[97988]: 2026-01-31 08:08:32.994148557 +0000 UTC m=+0.094440565 container init 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:08:33 compute-0 podman[97988]: 2026-01-31 08:08:33.003288058 +0000 UTC m=+0.103580006 container start 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:08:33 compute-0 podman[97988]: 2026-01-31 08:08:33.007334244 +0000 UTC m=+0.107626242 container attach 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:33 compute-0 unruffled_bouman[98005]: 167 167
Jan 31 08:08:33 compute-0 systemd[1]: libpod-41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b.scope: Deactivated successfully.
Jan 31 08:08:33 compute-0 podman[97988]: 2026-01-31 08:08:33.009025602 +0000 UTC m=+0.109317560 container died 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:08:33 compute-0 podman[97988]: 2026-01-31 08:08:32.91991531 +0000 UTC m=+0.020207258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:33 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event d5a72d36-5e9a-4289-8d07-2ee4a9e0f4d5 (Global Recovery Event) in 10 seconds
Jan 31 08:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-775d39ad38fb8074f8995d8b655083d712500cf0787231f3b7871f8133e3207a-merged.mount: Deactivated successfully.
Jan 31 08:08:33 compute-0 podman[97988]: 2026-01-31 08:08:33.048866608 +0000 UTC m=+0.149158566 container remove 41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_bouman, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:33 compute-0 systemd[1]: libpod-conmon-41a05a677ef220155297390cf465300301d68bda22750b47a02523ce8df6e48b.scope: Deactivated successfully.
Jan 31 08:08:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 08:08:33 compute-0 ceph-mon[75227]: pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 236 op/s
Jan 31 08:08:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 08:08:33 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 08:08:33 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 1f2c8d8f-218f-4e97-9265-e014252ce84a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 08:08:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.217924951 +0000 UTC m=+0.052070466 container create 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:33 compute-0 systemd[1]: Started libpod-conmon-351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337.scope.
Jan 31 08:08:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.198580489 +0000 UTC m=+0.032726054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.31253246 +0000 UTC m=+0.146678015 container init 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.320350583 +0000 UTC m=+0.154496138 container start 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.324766379 +0000 UTC m=+0.158911934 container attach 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:08:33 compute-0 wonderful_bell[98045]: {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:     "0": [
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:         {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "devices": [
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "/dev/loop3"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             ],
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_name": "ceph_lv0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_size": "21470642176",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "name": "ceph_lv0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "tags": {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.crush_device_class": "",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.encrypted": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osd_id": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.type": "block",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.vdo": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.with_tpm": "0"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             },
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "type": "block",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "vg_name": "ceph_vg0"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:         }
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:     ],
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:     "1": [
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:         {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "devices": [
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "/dev/loop4"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             ],
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_name": "ceph_lv1",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_size": "21470642176",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "name": "ceph_lv1",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "tags": {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.crush_device_class": "",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.encrypted": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osd_id": "1",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.type": "block",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.vdo": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.with_tpm": "0"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             },
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "type": "block",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "vg_name": "ceph_vg1"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:         }
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:     ],
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:     "2": [
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:         {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "devices": [
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "/dev/loop5"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             ],
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_name": "ceph_lv2",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_size": "21470642176",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "name": "ceph_lv2",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "tags": {
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.cluster_name": "ceph",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.crush_device_class": "",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.encrypted": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.objectstore": "bluestore",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osd_id": "2",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.type": "block",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.vdo": "0",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:                 "ceph.with_tpm": "0"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             },
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "type": "block",
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:             "vg_name": "ceph_vg2"
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:         }
Jan 31 08:08:33 compute-0 wonderful_bell[98045]:     ]
Jan 31 08:08:33 compute-0 wonderful_bell[98045]: }
Jan 31 08:08:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:33 compute-0 systemd[1]: libpod-351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337.scope: Deactivated successfully.
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.632660523 +0000 UTC m=+0.466806078 container died 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:08:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c1902009c6ff9062e18d3b4b0a1e6ec76f6e29ce70793aef03ae70c2eb676a-merged.mount: Deactivated successfully.
Jan 31 08:08:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 9.9 KiB/s wr, 220 op/s
Jan 31 08:08:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:33 compute-0 podman[98029]: 2026-01-31 08:08:33.70091729 +0000 UTC m=+0.535062825 container remove 351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:08:33 compute-0 systemd[1]: libpod-conmon-351362e87d53a301c3ad522cd791875507561250cb5283657a39d3dfe39d2337.scope: Deactivated successfully.
Jan 31 08:08:33 compute-0 sudo[97951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:33 compute-0 sudo[98065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:08:33 compute-0 sudo[98065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:33 compute-0 sudo[98065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:33 compute-0 sudo[98090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:08:33 compute-0 sudo[98090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 08:08:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 08:08:34 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 08:08:34 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev bea3075c-4cd0-4ea7-a2f9-012dea3b16ab (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 08:08:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:34 compute-0 ceph-mon[75227]: osdmap e42: 3 total, 3 up, 3 in
Jan 31 08:08:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:34 compute-0 ceph-mon[75227]: pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 9.9 KiB/s wr, 220 op/s
Jan 31 08:08:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:34 compute-0 podman[98130]: 2026-01-31 08:08:34.153856571 +0000 UTC m=+0.036636816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:34 compute-0 podman[98130]: 2026-01-31 08:08:34.262085039 +0000 UTC m=+0.144865234 container create 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:08:34 compute-0 systemd[1]: Started libpod-conmon-90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce.scope.
Jan 31 08:08:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:34 compute-0 podman[98130]: 2026-01-31 08:08:34.45424199 +0000 UTC m=+0.337022245 container init 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:08:34 compute-0 podman[98130]: 2026-01-31 08:08:34.463586917 +0000 UTC m=+0.346367112 container start 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:34 compute-0 systemd[1]: libpod-90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce.scope: Deactivated successfully.
Jan 31 08:08:34 compute-0 ecstatic_poincare[98147]: 167 167
Jan 31 08:08:34 compute-0 podman[98130]: 2026-01-31 08:08:34.520770658 +0000 UTC m=+0.403550853 container attach 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:08:34 compute-0 podman[98130]: 2026-01-31 08:08:34.522193699 +0000 UTC m=+0.404973924 container died 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:08:34 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=12.903619766s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 79.987213135s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:34 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=12.903619766s) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 79.987213135s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3577145f452fa2921c1fbdcb51559237a6c2193b993bb2a79df92aadb4670c6-merged.mount: Deactivated successfully.
Jan 31 08:08:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 08:08:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 08:08:35 compute-0 podman[98130]: 2026-01-31 08:08:35.363386777 +0000 UTC m=+1.246166972 container remove 90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_poincare, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:08:35 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 systemd[1]: libpod-conmon-90d0b13f29609f72ac11a85ebfb5f3f8c793d554ae67190f318a04732539b7ce.scope: Deactivated successfully.
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 921ae24a-b5aa-4d4d-861f-3e75adb35864 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:35 compute-0 ceph-mon[75227]: osdmap e43: 3 total, 3 up, 3 in
Jan 31 08:08:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=43/44 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [2] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:35 compute-0 podman[98172]: 2026-01-31 08:08:35.608645563 +0000 UTC m=+0.121759164 container create 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:08:35 compute-0 podman[98172]: 2026-01-31 08:08:35.520864369 +0000 UTC m=+0.033978030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:08:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v97: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 236 op/s
Jan 31 08:08:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:35 compute-0 systemd[1]: Started libpod-conmon-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope.
Jan 31 08:08:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:08:35 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:08:36 compute-0 podman[98172]: 2026-01-31 08:08:36.088948525 +0000 UTC m=+0.602062126 container init 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:08:36 compute-0 podman[98172]: 2026-01-31 08:08:36.097933061 +0000 UTC m=+0.611046652 container start 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:36 compute-0 podman[98172]: 2026-01-31 08:08:36.210528112 +0000 UTC m=+0.723641713 container attach 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:08:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 08:08:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 08:08:36 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 08:08:36 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 04386057-19f2-4725-b4c5-3655da5ae531 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 08:08:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 31 08:08:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: osdmap e44: 3 total, 3 up, 3 in
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:36 compute-0 ceph-mon[75227]: pgmap v97: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 236 op/s
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:36 compute-0 ceph-mon[75227]: osdmap e45: 3 total, 3 up, 3 in
Jan 31 08:08:36 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 31 08:08:36 compute-0 lvm[98268]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:08:36 compute-0 lvm[98269]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:08:36 compute-0 lvm[98269]: VG ceph_vg0 finished
Jan 31 08:08:36 compute-0 lvm[98268]: VG ceph_vg1 finished
Jan 31 08:08:36 compute-0 lvm[98271]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:08:36 compute-0 lvm[98271]: VG ceph_vg2 finished
Jan 31 08:08:36 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=12.787521362s) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 93.562095642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:36 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=12.787521362s) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 93.562095642s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:36 compute-0 suspicious_gould[98189]: {}
Jan 31 08:08:36 compute-0 systemd[1]: libpod-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope: Deactivated successfully.
Jan 31 08:08:36 compute-0 systemd[1]: libpod-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope: Consumed 1.040s CPU time.
Jan 31 08:08:36 compute-0 podman[98172]: 2026-01-31 08:08:36.893955819 +0000 UTC m=+1.407069410 container died 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:08:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3445257ee0e5639fb1e98c14aded6067d51d58c70f7957f48eeb2f823219e34-merged.mount: Deactivated successfully.
Jan 31 08:08:37 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 08:08:37 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=11.259410858s) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 87.504295349s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=11.259410858s) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown pruub 87.504295349s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 podman[98172]: 2026-01-31 08:08:37.384890734 +0000 UTC m=+1.898004335 container remove 9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:08:37 compute-0 systemd[1]: libpod-conmon-9b052eb212ee2d260fff83a97775d7696d5f66449179e008b4fcbdadf5299288.scope: Deactivated successfully.
Jan 31 08:08:37 compute-0 sudo[98090]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:08:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 08:08:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:08:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v99: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 08:08:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 08:08:37 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev e0f04fc9-3a4a-419c-885d-453067ed64a6 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-mon[75227]: 2.1e scrub starts
Jan 31 08:08:37 compute-0 ceph-mon[75227]: 2.1e scrub ok
Jan 31 08:08:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [0] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=45/46 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [1] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:38 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 6 completed events
Jan 31 08:08:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:08:38 compute-0 sudo[98288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:08:38 compute-0 sudo[98288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:08:38 compute-0 sudo[98288]: pam_unix(sudo:session): session closed for user root
Jan 31 08:08:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:38 compute-0 ceph-mgr[75519]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Jan 31 08:08:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 08:08:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 08:08:38 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 31 08:08:38 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 31 08:08:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 08:08:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 08:08:38 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 08:08:38 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev b90c2479-d359-4e6d-8ffc-5b2641c84c5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 08:08:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:38 compute-0 ceph-mon[75227]: pgmap v99: 104 pgs: 93 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 08:08:38 compute-0 ceph-mon[75227]: osdmap e46: 3 total, 3 up, 3 in
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:38 compute-0 ceph-mon[75227]: osdmap e47: 3 total, 3 up, 3 in
Jan 31 08:08:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:39 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=11.337800980s) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active pruub 83.031776428s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:39 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=11.337800980s) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown pruub 83.031776428s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v102: 135 pgs: 1 peering, 62 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 31 08:08:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 08:08:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 08:08:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 08:08:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=48 pruub=13.171987534s) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active pruub 92.220489502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev b11b1caf-b01e-4a34-8316-28039473002a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=48 pruub=13.171987534s) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown pruub 92.220489502s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=22/23 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:40 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:40 compute-0 ceph-mon[75227]: 3.1f scrub starts
Jan 31 08:08:40 compute-0 ceph-mon[75227]: 3.1f scrub ok
Jan 31 08:08:40 compute-0 ceph-mon[75227]: 4.1c scrub starts
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-mon[75227]: 4.1c scrub ok
Jan 31 08:08:40 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:40 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=47/48 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [2] r=0 lpr=47 pi=[22,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 08:08:40 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 08:08:40 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 015710af-b72f-4b5c-b3f1-4f3d67ad9f4e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 08:08:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:40 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.13( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.12( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.17( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.15( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.14( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.10( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.b( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.a( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.9( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.8( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.d( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.6( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.4( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.f( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.e( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.c( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.7( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.2( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.3( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1d( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1e( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.18( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.19( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1c( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1b( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1a( empty local-lis/les=24/25 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.17( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.16( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.12( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.14( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=48/49 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.d( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.7( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.10( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1d( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.19( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 49 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=24/24 les/c/f=25/25/0 sis=48) [1] r=0 lpr=48 pi=[24,48)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:41 compute-0 ceph-mon[75227]: pgmap v102: 135 pgs: 1 peering, 62 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: osdmap e48: 3 total, 3 up, 3 in
Jan 31 08:08:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: osdmap e49: 3 total, 3 up, 3 in
Jan 31 08:08:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v105: 181 pgs: 2 peering, 77 unknown, 102 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 08:08:41 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 901d1da8-4684-4daf-bb55-fd322d0e69d9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 08:08:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 31 08:08:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=11.599143982s) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 34'5 mlcod 34'5 active pruub 92.479797363s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=35/36 n=210 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50 pruub=13.248332024s) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 41'482 mlcod 41'482 active pruub 94.129051208s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[8.0( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=11.599143982s) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 34'5 mlcod 0'0 unknown pruub 92.479797363s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:41 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 50 pg[9.0( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50 pruub=13.248332024s) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 41'482 mlcod 0'0 unknown pruub 94.129051208s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de600 space 0x55d781f92540 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828dfb00 space 0x55d782243a40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966080 space 0x55d78213f440 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5f200 space 0x55d781f2ae40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782970100 space 0x55d781f92b40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de800 space 0x55d781f93740 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782958b80 space 0x55d781b97a40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966500 space 0x55d78207ae40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966900 space 0x55d781f2ba40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966c00 space 0x55d781f2b140 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782975f00 space 0x55d7820af740 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78295e680 space 0x55d782097d40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828df000 space 0x55d7820af440 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966a00 space 0x55d78207ba40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967480 space 0x55d7820ec240 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828dea00 space 0x55d782094840 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894e80 space 0x55d781f6a540 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894180 space 0x55d7820dd140 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974b80 space 0x55d7820d7440 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966f80 space 0x55d781ba7740 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967680 space 0x55d7820ecb40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974f80 space 0x55d7820d6240 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782970800 space 0x55d781ba6240 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782975d00 space 0x55d781bc3740 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de900 space 0x55d781f29140 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78297ff00 space 0x55d781bc2540 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0c80 space 0x55d781f98540 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5f500 space 0x55d782152240 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894d00 space 0x55d7820ddd40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0000 space 0x55d781bc3d40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0280 space 0x55d781f7fa40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974d80 space 0x55d7820d6b40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0e00 space 0x55d782107740 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7827e7c00 space 0x55d782236840 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78290fa80 space 0x55d781ba6b40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974b00 space 0x55d782152b40 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781b0b200 space 0x55d781bc2e40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d78295ff80 space 0x55d781f2cb40 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0700 space 0x55d782a56540 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5f600 space 0x55d781f93140 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966100 space 0x55d781f2c240 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967780 space 0x55d7820ae840 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0680 space 0x55d7820dd440 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782970380 space 0x55d782097140 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5ef80 space 0x55d7820ec840 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782916b80 space 0x55d781f98e40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7827e7300 space 0x55d782cde840 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782959580 space 0x55d781b97140 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782894d80 space 0x55d782106540 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974200 space 0x55d7820dc840 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7828de300 space 0x55d781f6ae40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974400 space 0x55d7820af140 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782a5ed00 space 0x55d782095d40 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d781ce0a00 space 0x55d782094540 0x0~9a clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782966e80 space 0x55d781f2a840 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782974980 space 0x55d7820d7d40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782927a00 space 0x55d781b96e40 0x0~6e clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d7827e6880 space 0x55d781f2dd40 0x0~98 clean)
Jan 31 08:08:41 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55d781f726c0) split_cache   moving buffer(0x55d782967200 space 0x55d7820ed440 0x0~6e clean)
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 48 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=9.509422302s) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 37'38 mlcod 37'38 active pruub 95.593406677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 48 pg[6.0( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48 pruub=9.509422302s) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 37'38 mlcod 0'0 unknown pruub 95.593406677s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-mon[75227]: pgmap v105: 181 pgs: 2 peering, 77 unknown, 102 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:42 compute-0 ceph-mon[75227]: osdmap e50: 3 total, 3 up, 3 in
Jan 31 08:08:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 50 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=23/24 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 31 08:08:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 31 08:08:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 08:08:42 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 08:08:42 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] update: starting ev 06d4d9a4-3842-4b34-8d16-64fb69abde60 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.14( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.15( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.15( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.14( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.16( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.17( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.17( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.16( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.10( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 1f2c8d8f-218f-4e97-9265-e014252ce84a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 1f2c8d8f-218f-4e97-9265-e014252ce84a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev bea3075c-4cd0-4ea7-a2f9-012dea3b16ab (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event bea3075c-4cd0-4ea7-a2f9-012dea3b16ab (PG autoscaler increasing pool 3 PGs from 1 to 32) in 9 seconds
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 921ae24a-b5aa-4d4d-861f-3e75adb35864 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 921ae24a-b5aa-4d4d-861f-3e75adb35864 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 8 seconds
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 04386057-19f2-4725-b4c5-3655da5ae531 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 04386057-19f2-4725-b4c5-3655da5ae531 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.11( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.10( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.11( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev e0f04fc9-3a4a-419c-885d-453067ed64a6 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event e0f04fc9-3a4a-419c-885d-453067ed64a6 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.12( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev b90c2479-d359-4e6d-8ffc-5b2641c84c5a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event b90c2479-d359-4e6d-8ffc-5b2641c84c5a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev b11b1caf-b01e-4a34-8316-28039473002a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event b11b1caf-b01e-4a34-8316-28039473002a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.13( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.12( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.13( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 015710af-b72f-4b5c-b3f1-4f3d67ad9f4e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 015710af-b72f-4b5c-b3f1-4f3d67ad9f4e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 901d1da8-4684-4daf-bb55-fd322d0e69d9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 901d1da8-4684-4daf-bb55-fd322d0e69d9 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] complete: finished ev 06d4d9a4-3842-4b34-8d16-64fb69abde60 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 06d4d9a4-3842-4b34-8d16-64fb69abde60 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.8( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.9( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.3( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.2( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.9( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.8( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.2( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.3( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.7( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.6( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.7( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.5( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.4( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.4( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.5( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.19( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.18( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.18( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.19( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.16( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.14( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.17( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.10( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.13( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.12( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 37'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.8( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 34'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 41'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.3( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.2( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.a( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.7( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.5( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.4( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1a( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.19( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [1] r=0 lpr=50 pi=[33,50)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 51 pg[9.5( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=35/35 les/c/f=36/36/0 sis=50) [1] r=0 lpr=50 pi=[35,50)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 51 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=23/23 les/c/f=24/24/0 sis=48) [0] r=0 lpr=48 pi=[23,48)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:43 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 16 completed events
Jan 31 08:08:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:08:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v108: 243 pgs: 2 peering, 139 unknown, 102 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 31 08:08:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:43 compute-0 ceph-mon[75227]: 3.1e scrub starts
Jan 31 08:08:43 compute-0 ceph-mon[75227]: 3.1e scrub ok
Jan 31 08:08:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 08:08:43 compute-0 ceph-mon[75227]: osdmap e51: 3 total, 3 up, 3 in
Jan 31 08:08:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 31 08:08:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 08:08:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 08:08:44 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 08:08:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 31 08:08:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 31 08:08:44 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 08:08:44 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 08:08:44 compute-0 ceph-mon[75227]: pgmap v108: 243 pgs: 2 peering, 139 unknown, 102 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 08:08:44 compute-0 ceph-mon[75227]: osdmap e52: 3 total, 3 up, 3 in
Jan 31 08:08:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52 pruub=13.963813782s) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active pruub 98.159751892s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=37/38 n=9 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=52 pruub=11.955750465s) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 41'17 mlcod 41'17 active pruub 89.636589050s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=52 pruub=11.955750465s) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 unknown pruub 89.636589050s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52 pruub=13.963813782s) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown pruub 98.159751892s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:45 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 31 08:08:45 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 31 08:08:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 08:08:45 compute-0 ceph-mon[75227]: 3.1b scrub starts
Jan 31 08:08:45 compute-0 ceph-mon[75227]: 3.1b scrub ok
Jan 31 08:08:45 compute-0 ceph-mon[75227]: 2.1f scrub starts
Jan 31 08:08:45 compute-0 ceph-mon[75227]: 2.1f scrub ok
Jan 31 08:08:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 08:08:45 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.12( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.c( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.18( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.5( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 41'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.3( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.9( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.d( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.c( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.15( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.14( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 53 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=37/37 les/c/f=38/38/0 sis=52) [2] r=0 lpr=52 pi=[37,52)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:46 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 31 08:08:46 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 31 08:08:47 compute-0 ceph-mon[75227]: pgmap v110: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:47 compute-0 ceph-mon[75227]: 2.1c scrub starts
Jan 31 08:08:47 compute-0 ceph-mon[75227]: 2.1c scrub ok
Jan 31 08:08:47 compute-0 ceph-mon[75227]: osdmap e53: 3 total, 3 up, 3 in
Jan 31 08:08:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 31 08:08:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 31 08:08:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:47 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 31 08:08:47 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 31 08:08:48 compute-0 ceph-mon[75227]: 2.a scrub starts
Jan 31 08:08:48 compute-0 ceph-mon[75227]: 2.a scrub ok
Jan 31 08:08:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:48 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 08:08:48 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 08:08:49 compute-0 ceph-mon[75227]: 3.1a scrub starts
Jan 31 08:08:49 compute-0 ceph-mon[75227]: 3.1a scrub ok
Jan 31 08:08:49 compute-0 ceph-mon[75227]: pgmap v112: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:49 compute-0 ceph-mon[75227]: 2.2 scrub starts
Jan 31 08:08:49 compute-0 ceph-mon[75227]: 2.2 scrub ok
Jan 31 08:08:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:50 compute-0 ceph-mon[75227]: 2.6 scrub starts
Jan 31 08:08:50 compute-0 ceph-mon[75227]: 2.6 scrub ok
Jan 31 08:08:50 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 08:08:50 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 08:08:50 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 31 08:08:50 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 31 08:08:51 compute-0 ceph-mon[75227]: pgmap v113: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:51 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 31 08:08:51 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 31 08:08:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:08:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 08:08:52 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.366329193s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.747566223s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403979301s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785240173s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546413422s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927688599s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546381950s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927696228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.366270065s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.747566223s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546366692s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927688599s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403889656s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785240173s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.546327591s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927696228s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403687477s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785438538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403489113s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785255432s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545950890s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927726746s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403654099s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785438538s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545910835s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927726746s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403573036s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785461426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403446198s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785255432s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403544426s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785461426s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403316498s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785453796s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403273582s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785453796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545407295s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.927742004s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403190613s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785583496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.545362473s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927742004s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403147697s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785568237s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403170586s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785583496s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403113365s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785568237s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403079033s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785713196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403055191s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785713196s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551298141s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.933990479s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.554909706s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.937629700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551251411s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.933990479s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.554866791s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.937629700s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551415443s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.934501648s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402625084s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785736084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551385880s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934501648s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402586937s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785736084s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402750969s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.786003113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403772354s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.787002563s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402713776s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.786003113s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551176071s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 active pruub 110.934494019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403694153s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.787002563s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54 pruub=14.551154137s) [1] r=-1 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934494019s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402445793s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785827637s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402412415s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785827637s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402297974s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785804749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402265549s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785812378s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402276993s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785804749s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402242661s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785812378s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402084351s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785820007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402440071s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.786193848s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402047157s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785820007s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.402405739s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.786193848s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403245926s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.787071228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.403223991s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.787071228s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.401849747s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785881042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.401474953s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785881042s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.401257515s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.785247803s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.400353432s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.785247803s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.811231613s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653053284s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567105293s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.408935547s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.811202049s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653053284s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566882133s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.408943176s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.791992188s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.634162903s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.791945457s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.634162903s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566702843s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.409156799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143866539s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986343384s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566665649s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.409156799s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143800735s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986335754s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143803596s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986343384s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143773079s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986335754s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.12( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567013741s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.408935547s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143560410s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986312866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566806793s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.408943176s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.572649956s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415519714s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143499374s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986412048s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143529892s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986312866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.572596550s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415519714s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143474579s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986412048s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143161774s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986320496s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809897423s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653068542s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809860229s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653068542s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.810057640s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653305054s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.143073082s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986320496s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.810032845s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653305054s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809679985s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653076172s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809659958s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653076172s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142872810s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986312866s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142829895s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986312866s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568995476s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.412544250s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568959236s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.412567139s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568970680s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.412544250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809531212s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653137207s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568924904s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.412567139s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809463501s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653137207s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142394066s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986145020s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809522629s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653297424s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.142354012s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986145020s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809453964s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653297424s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141651154s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985633850s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568623543s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.412681580s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141567230s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985633850s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.569154739s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413276672s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809201241s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653343201s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568585396s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.412681580s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.809109688s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653343201s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141268730s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985618591s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141236305s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985618591s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568981171s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413444519s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808795929s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653289795s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568946838s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413444519s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808708191s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653289795s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140923500s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985603333s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140869141s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985603333s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808568954s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653335571s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568681717s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413459778s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808535576s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653335571s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568647385s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413459778s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568542480s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.413581848s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568243980s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413276672s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568473816s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.413581848s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140815735s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985946655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140743256s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985946655s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808373451s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653633118s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.808355331s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653633118s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.141191483s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986480713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567784309s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.413619995s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.140518188s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986480713s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.9( v 53'19 (0'0,53'19] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567600250s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.413619995s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.807469368s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653564453s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.807430267s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653564453s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.139079094s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985427856s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568891525s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415252686s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.139038086s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985427856s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.568850517s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415252686s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138841629s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985397339s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138813019s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985397339s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806970596s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653594971s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.1e( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806947708s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653594971s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138633728s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985435486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138595581s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985435486s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806786537s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653732300s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138245583s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985260010s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.138203621s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985260010s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806687355s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653732300s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567918777s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.415283203s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806267738s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653656006s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567814827s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.415267944s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137924194s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985374451s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.d( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567847252s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.415283203s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137880325s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985374451s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.806214333s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653656006s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.e( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567748070s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.415267944s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805973053s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653640747s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567916870s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415634155s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805935860s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653678894s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805913925s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653640747s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567890167s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415634155s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.805903435s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653678894s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567959785s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.416152954s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137019157s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985260010s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137157440s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985412598s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=52/53 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567924500s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.416152954s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567237854s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415512085s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136970520s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985260010s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137116432s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985412598s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567194939s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415512085s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137442589s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985946655s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567556381s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.416091919s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566939354s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 active pruub 94.415596008s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.14( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.567514420s) [1] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.416091919s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136508942s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985221863s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.15( v 53'19 (0'0,53'19] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566890717s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.415596008s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136459351s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985221863s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137128830s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.986152649s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566961288s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.415992737s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566926956s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.415992737s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.137091637s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.986152649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.136816025s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985946655s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804634094s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653869629s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566829681s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 active pruub 94.416191101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804590225s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653869629s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=52/53 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.566802979s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.416191101s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804243088s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653762817s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804201126s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653762817s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.135580063s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 99.985176086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804260254s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 96.653862000s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=15.135535240s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 99.985176086s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=47/48 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=11.804222107s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 96.653862000s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.7( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.4( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.8( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.9( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.d( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.e( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.1( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.15( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.16( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[10.17( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.1a( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.19( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.6( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.b( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.1( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.2( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.f( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.11( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.10( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.13( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.12( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[10.14( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512075424s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.927856445s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.477354050s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893218994s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463869095s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879730225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512034416s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.927856445s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.477331161s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893218994s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463832855s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879730225s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475920677s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.891906738s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475829124s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.891906738s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463495255s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879730225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.463477135s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879730225s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475553513s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.891914368s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475517273s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.891914368s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334954262s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751403809s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334927559s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751403809s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334781647s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751213074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512706757s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929229736s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512685776s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929229736s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334666252s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751213074s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.476438522s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.893142700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512269974s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929046631s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462862015s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879676819s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.512231827s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929046631s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462832451s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879676819s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.476285934s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.893142700s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.313980103s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.730964661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.313960075s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.730964661s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462627411s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879745483s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.476002693s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893150330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.462597847s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879745483s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475985527s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893150330s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334197998s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751396179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475849152s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.893211365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511613846s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929054260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333983421s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751396179s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511595726s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929054260s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475621223s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893196106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475601196s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893196106s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511446953s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929191589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475505829s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.893302917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.475494385s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.893302917s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511421204s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929191589s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483572960s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901481628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483560562s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901481628s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461873055s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879890442s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461859703s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879890442s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333656311s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.751785278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.511238098s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929397583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333625793s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.751785278s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510953903s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929138184s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461362839s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879600525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461347580s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879600525s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.474966049s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.893211365s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334375381s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752700806s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334362030s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752700806s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483026505s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.901390076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.483014107s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.901390076s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.474869728s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.893302917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461119652s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879592896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.461109161s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879592896s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.474858284s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.893302917s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.517030716s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935539246s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.517004013s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935539246s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334179878s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752746582s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334168434s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752746582s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482809067s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.901397705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482793808s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.901397705s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460900307s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879600525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460887909s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879600525s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334342003s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753082275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.334328651s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753082275s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482596397s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.901397705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482586861s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.901397705s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482557297s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901405334s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510538101s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929397583s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482544899s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901405334s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510442734s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929313660s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510410309s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929313660s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510427475s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.929374695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.510416985s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929374695s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333257675s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752258301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333246231s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752258301s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516615868s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935722351s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460489273s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879608154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516606331s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935722351s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482373238s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901496887s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460473061s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879608154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482350349s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901496887s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333232880s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752441406s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333211899s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752441406s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482227325s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.901504517s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.482213974s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.901504517s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460116386s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879432678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333134651s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752471924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333121300s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752471924s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.460096359s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879432678s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516116142s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935531616s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.516103745s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935531616s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.459880829s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879325867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.459869385s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879325867s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333497047s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753059387s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.515996933s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935562134s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.515967369s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935562134s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.333464622s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753059387s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.509437561s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.929138184s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.14( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.10( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.c( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.e( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.15( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.11( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.12( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.d( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-mon[75227]: 3.18 scrub starts
Jan 31 08:08:52 compute-0 ceph-mon[75227]: 3.18 scrub ok
Jan 31 08:08:52 compute-0 ceph-mon[75227]: 4.1f scrub starts
Jan 31 08:08:52 compute-0 ceph-mon[75227]: 4.1f scrub ok
Jan 31 08:08:52 compute-0 ceph-mon[75227]: 4.a scrub starts
Jan 31 08:08:52 compute-0 ceph-mon[75227]: 4.a scrub ok
Jan 31 08:08:52 compute-0 ceph-mon[75227]: pgmap v114: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.336371422s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879249573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.359142303s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902038574s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.359107971s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902038574s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.336319923s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879249573s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358894348s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902099609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358872414s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.392084122s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935607910s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.392064095s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935607910s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335464478s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879196167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335405350s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879165649s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335379601s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879165649s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.335411072s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879196167s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358260155s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902099609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358234406s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391707420s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935768127s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391681671s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935768127s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.358014107s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902267456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357990265s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902267456s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391457558s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935882568s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.391435623s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935882568s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334536552s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879203796s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334514618s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879203796s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.f( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.208025932s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752876282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.208004951s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752876282s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207977295s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752876282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207951546s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752876282s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357210159s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902099609s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334060669s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879150391s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357098579s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902099609s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.334037781s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879150391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357048988s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902259827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357003212s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902267456s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356979370s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902267456s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390460968s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.935829163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.333687782s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.879142761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390439034s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.935829163s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.333665848s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.879142761s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357176781s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902030945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356482506s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902030945s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207416534s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753074646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207395554s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753074646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.9( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207137108s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752899170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.207106590s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752899170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356471062s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902343750s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356512070s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 41'483 active pruub 105.902404785s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356439590s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902343750s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356477737s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 105.902404785s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390192986s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936172485s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390171051s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936172485s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=50/51 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.357024193s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902259827s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356262207s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902442932s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390110970s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936317444s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.332658768s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.878890991s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.390088081s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936317444s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356236458s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902442932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.332636833s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.878890991s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206583023s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.752952576s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356039047s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389750481s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936210632s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.356015205s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389726639s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936210632s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206468582s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.752952576s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389445305s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936195374s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389420509s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936195374s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355772972s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902580261s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206260681s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753089905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355648041s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355749130s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902580261s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.206235886s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753089905s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355624199s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389324188s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936279297s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355732918s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902465820s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.389301300s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936279297s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355465889s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902465820s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355365753s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902557373s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.331678391s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.878875732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355341911s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902557373s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.331655502s) [2] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.878875732s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355225563s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902565002s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355195045s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902565002s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205713272s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753135681s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205686569s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753135681s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388819695s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936332703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388796806s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936332703s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355127335s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902832031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388573647s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 100.936309814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355089188s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 active pruub 105.902839661s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355102539s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902832031s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=9.388548851s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 100.936309814s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.355070114s) [0] r=-1 lpr=54 pi=[50,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 105.902839661s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.329282761s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 active pruub 103.877197266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.354876518s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 active pruub 105.902847290s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=48/49 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=12.329265594s) [0] r=-1 lpr=54 pi=[48,54)/1 crt=0'0 unknown NOTIFY pruub 103.877197266s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205083847s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753089905s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205039978s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 100.753074646s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205048561s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753089905s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=9.205020905s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 100.753074646s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 54 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=50/51 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54 pruub=14.354350090s) [2] r=-1 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 105.902847290s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.b( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.2( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.6( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.4( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.18( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.1a( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.1b( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.1d( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[8.1f( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=0/0 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:52 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 54 pg[8.1c( empty local-lis/les=0/0 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-mgr[75519]: [progress INFO root] Completed event 4e7c68ba-2212-45d3-b6e9-ef9409eb8f49 (Global Recovery Event) in 15 seconds
Jan 31 08:08:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 08:08:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 08:08:53 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 08:08:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.18( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.1e( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[50,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 08:08:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:08:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 31 08:08:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.1a( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1b( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.15( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:08:53 compute-0 ceph-mon[75227]: osdmap e54: 3 total, 3 up, 3 in
Jan 31 08:08:53 compute-0 ceph-mon[75227]: osdmap e55: 3 total, 3 up, 3 in
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.11( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.3( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.1d( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.8( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1a( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.c( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.12( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.7( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.d( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.8( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.1( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=54/55 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.e( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.2( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.5( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.9( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.5( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.b( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.2( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.e( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.8( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=54/55 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.a( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.e( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.a( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.11( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.15( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.18( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.11( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1a( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1b( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1c( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1f( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.13( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.16( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.1e( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.1c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.11( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[11.11( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [2] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[7.1c( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [2] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[4.1c( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 55 pg[3.18( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.17( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.14( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.15( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.16( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.8( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.b( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.2( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.e( v 53'19 lc 38'4 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.1f( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.d( v 53'19 lc 38'5 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.5( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.f( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.2( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.13( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.3( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.1c( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.1d( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.7( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.15( v 53'19 lc 38'3 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.1e( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.18( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[2.19( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.10( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[10.9( v 53'19 lc 38'8 (0'0,53'19] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.1b( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.1f( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.f( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.13( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.15( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.11( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.12( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.8( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.9( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.16( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.d( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.f( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 37'21 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.3( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 37'9 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.4( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.c( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[5.4( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.1( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.7( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.9( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.5( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 lc 37'1 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.c( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.9( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.4( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.18( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.9( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.6( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.1( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.1b( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.5( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.a( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.1a( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.14( v 53'19 lc 38'7 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.4( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.1d( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.2( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.6( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.3( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=1 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.6( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.3( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.a( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.f( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.17( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.13( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.19( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[5.18( empty local-lis/les=54/55 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.d( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[10.12( v 53'19 lc 41'17 (0'0,53'19] local-lis/les=54/55 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=54) [1] r=0 lpr=54 pi=[52,54)/1 crt=53'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.6( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.f( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[6.d( v 39'39 lc 37'10 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=54) [1] r=0 lpr=54 pi=[48,54)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[2.7( empty local-lis/les=54/55 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.12( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.14( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 55 pg[4.10( empty local-lis/les=54/55 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.9( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.15( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.12( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=50/33 lis/c=50/50 les/c/f=51/51/0 sis=54) [0] r=0 lpr=54 pi=[50,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[3.1f( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:53 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 55 pg[7.1b( empty local-lis/les=54/55 n=0 ec=48/24 lis/c=48/48 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 08:08:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:08:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:08:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 08:08:54 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 08:08:55 compute-0 ceph-mon[75227]: pgmap v117: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:08:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:08:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.930760384s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.934173584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.930695534s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934173584s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.923440933s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.927833557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.923384666s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.927833557s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929800034s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.934494019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929306984s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 active pruub 110.934066772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929759026s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934494019s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:55 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56 pruub=11.929266930s) [1] r=-1 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 110.934066772s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.6( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.2( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[6.e( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.5( v 53'484 (0'0,53'484] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=53'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 56 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[50,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:55 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 31 08:08:55 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 31 08:08:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 2 active+recovery_wait, 16 active+recovery_wait+remapped, 4 peering, 3 active+recovery_wait+degraded, 2 active+recovering, 278 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6/249 objects degraded (2.410%); 103/249 objects misplaced (41.365%); 87 B/s, 1 objects/s recovering
Jan 31 08:08:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 08:08:56 compute-0 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 6/249 objects degraded (2.410%), 3 pgs degraded (PG_DEGRADED)
Jan 31 08:08:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 08:08:56 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 08:08:56 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 08:08:56 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:08:56 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 08:08:56 compute-0 ceph-mon[75227]: osdmap e56: 3 total, 3 up, 3 in
Jan 31 08:08:56 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 08:08:56 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=56/57 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:56 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=56/57 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:56 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.e( v 39'39 lc 37'19 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:56 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 57 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=56) [1] r=0 lpr=56 pi=[48,56)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 08:08:57 compute-0 ceph-mon[75227]: 4.1d scrub starts
Jan 31 08:08:57 compute-0 ceph-mon[75227]: 4.1d scrub ok
Jan 31 08:08:57 compute-0 ceph-mon[75227]: pgmap v119: 305 pgs: 2 active+recovery_wait, 16 active+recovery_wait+remapped, 4 peering, 3 active+recovery_wait+degraded, 2 active+recovering, 278 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6/249 objects degraded (2.410%); 103/249 objects misplaced (41.365%); 87 B/s, 1 objects/s recovering
Jan 31 08:08:57 compute-0 ceph-mon[75227]: Health check failed: Degraded data redundancy: 6/249 objects degraded (2.410%), 3 pgs degraded (PG_DEGRADED)
Jan 31 08:08:57 compute-0 ceph-mon[75227]: osdmap e57: 3 total, 3 up, 3 in
Jan 31 08:08:57 compute-0 ceph-mon[75227]: 4.1e scrub starts
Jan 31 08:08:57 compute-0 ceph-mon[75227]: 4.1e scrub ok
Jan 31 08:08:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v121: 305 pgs: 2 active+recovery_wait, 16 active+recovery_wait+remapped, 4 peering, 3 active+recovery_wait+degraded, 2 active+recovering, 278 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6/249 objects degraded (2.410%); 103/249 objects misplaced (41.365%); 98 B/s, 2 objects/s recovering
Jan 31 08:08:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 08:08:57 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 08:08:58 compute-0 ceph-mgr[75519]: [progress INFO root] Writing back 17 completed events
Jan 31 08:08:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 31 08:08:58 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58 pruub=12.939789772s) [0] async=[0] r=-1 lpr=58 pi=[50,58)/1 crt=41'483 lcod 0'0 active pruub 110.058883667s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:58 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58 pruub=12.939711571s) [0] r=-1 lpr=58 pi=[50,58)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.058883667s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:58 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:58 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:08:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 08:08:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 08:08:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 08:08:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.936095238s) [0] async=[0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 active pruub 110.058906555s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.936017036s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.058906555s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.941884041s) [0] async=[0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 active pruub 110.067108154s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:59 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59 pruub=11.941802979s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067108154s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:08:59 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:59 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:59 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:08:59 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:08:59 compute-0 ceph-mon[75227]: pgmap v121: 305 pgs: 2 active+recovery_wait, 16 active+recovery_wait+remapped, 4 peering, 3 active+recovery_wait+degraded, 2 active+recovering, 278 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 6/249 objects degraded (2.410%); 103/249 objects misplaced (41.365%); 98 B/s, 2 objects/s recovering
Jan 31 08:08:59 compute-0 ceph-mon[75227]: osdmap e58: 3 total, 3 up, 3 in
Jan 31 08:08:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:08:59 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 59 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=58) [0] r=0 lpr=58 pi=[50,58)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:08:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 2 active+recovery_wait, 13 active+recovery_wait+remapped, 6 peering, 2 active+recovery_wait+degraded, 1 active+recovering, 281 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 87/249 objects misplaced (34.940%); 193 B/s, 5 objects/s recovering
Jan 31 08:09:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 08:09:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 08:09:00 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 08:09:00 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:00 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:00 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:00 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.214437485s) [0] async=[0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 active pruub 110.067276001s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:00 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.214324951s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067276001s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:00 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.212243080s) [0] async=[0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 active pruub 110.067161560s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:00 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60 pruub=10.212137222s) [0] r=-1 lpr=60 pi=[50,60)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067161560s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:00 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:01 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 31 08:09:01 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 31 08:09:01 compute-0 ceph-mon[75227]: osdmap e59: 3 total, 3 up, 3 in
Jan 31 08:09:01 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:01 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 60 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=59/60 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 08:09:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 62/249 objects misplaced (24.900%); 433 B/s, 1 keys/s, 9 objects/s recovering
Jan 31 08:09:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 08:09:01 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 08:09:02 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61 pruub=9.120767593s) [0] async=[0] r=-1 lpr=61 pi=[50,61)/1 crt=41'483 lcod 0'0 active pruub 110.067260742s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:02 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61 pruub=9.120581627s) [0] r=-1 lpr=61 pi=[50,61)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 110.067260742s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:02 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:02 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:02 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 2 pgs degraded)
Jan 31 08:09:02 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:09:02 compute-0 ceph-mon[75227]: pgmap v124: 305 pgs: 2 active+recovery_wait, 13 active+recovery_wait+remapped, 6 peering, 2 active+recovery_wait+degraded, 1 active+recovering, 281 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 87/249 objects misplaced (34.940%); 193 B/s, 5 objects/s recovering
Jan 31 08:09:02 compute-0 ceph-mon[75227]: osdmap e60: 3 total, 3 up, 3 in
Jan 31 08:09:02 compute-0 ceph-mon[75227]: 2.1a scrub starts
Jan 31 08:09:02 compute-0 ceph-mon[75227]: 2.1a scrub ok
Jan 31 08:09:02 compute-0 ceph-mon[75227]: osdmap e61: 3 total, 3 up, 3 in
Jan 31 08:09:02 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=60/61 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:02 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 61 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=60/61 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=60) [0] r=0 lpr=60 pi=[50,60)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 08:09:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 08:09:03 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 08:09:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.935668945s) [0] async=[0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 active pruub 118.067489624s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.935050011s) [0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067489624s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.933579445s) [0] async=[0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 active pruub 118.067451477s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:03 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62 pruub=15.933343887s) [0] r=-1 lpr=62 pi=[50,62)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067451477s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:03 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:03 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:03 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:03 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 62/249 objects misplaced (24.900%); 522 B/s, 2 keys/s, 10 objects/s recovering
Jan 31 08:09:03 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 62 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=61/62 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:03 compute-0 ceph-mon[75227]: pgmap v126: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 62/249 objects misplaced (24.900%); 433 B/s, 1 keys/s, 9 objects/s recovering
Jan 31 08:09:03 compute-0 ceph-mon[75227]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 2 pgs degraded)
Jan 31 08:09:03 compute-0 ceph-mon[75227]: Cluster is now healthy
Jan 31 08:09:03 compute-0 ceph-mon[75227]: osdmap e62: 3 total, 3 up, 3 in
Jan 31 08:09:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 08:09:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 08:09:04 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 08:09:04 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63 pruub=14.899969101s) [0] async=[0] r=-1 lpr=63 pi=[50,63)/1 crt=41'483 lcod 0'0 active pruub 118.067497253s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:04 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63 pruub=14.899835587s) [0] r=-1 lpr=63 pi=[50,63)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067497253s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:04 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:04 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:04 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:04 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 63 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=62) [0] r=0 lpr=62 pi=[50,62)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:04 compute-0 ceph-mon[75227]: pgmap v129: 305 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 290 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 62/249 objects misplaced (24.900%); 522 B/s, 2 keys/s, 10 objects/s recovering
Jan 31 08:09:04 compute-0 ceph-mon[75227]: osdmap e63: 3 total, 3 up, 3 in
Jan 31 08:09:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 31 08:09:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 31 08:09:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 08:09:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 08:09:05 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703495026s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067504883s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703622818s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067710876s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703415871s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067504883s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.703359604s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067710876s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702342987s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067642212s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702383995s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 active pruub 118.067672729s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702288628s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067642212s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.702269554s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067672729s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.700806618s) [0] async=[0] r=-1 lpr=64 pi=[50,64)/1 crt=53'484 lcod 53'484 active pruub 118.067619324s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64 pruub=13.700736046s) [0] r=-1 lpr=64 pi=[50,64)/1 crt=53'484 lcod 53'484 unknown NOTIFY pruub 118.067619324s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=53'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=53'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:05 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 64 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=63/64 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=63) [0] r=0 lpr=63 pi=[50,63)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/249 objects misplaced (3.614%); 552 B/s, 13 objects/s recovering
Jan 31 08:09:06 compute-0 ceph-mon[75227]: 5.1f scrub starts
Jan 31 08:09:06 compute-0 ceph-mon[75227]: 5.1f scrub ok
Jan 31 08:09:06 compute-0 ceph-mon[75227]: osdmap e64: 3 total, 3 up, 3 in
Jan 31 08:09:06 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 08:09:06 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 08:09:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 08:09:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 08:09:06 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.617675781s) [0] async=[0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 active pruub 118.067848206s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.617586136s) [0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067848206s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.5( v 56'485 (0'0,56'485] local-lis/les=64/65 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=56'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=64/65 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 65 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=64/65 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=64) [0] r=0 lpr=64 pi=[50,64)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.613872528s) [0] async=[0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 active pruub 118.067260742s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:06 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 65 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65 pruub=12.613566399s) [0] r=-1 lpr=65 pi=[50,65)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 118.067260742s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:06 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 31 08:09:06 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 31 08:09:07 compute-0 ceph-mon[75227]: pgmap v132: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/249 objects misplaced (3.614%); 552 B/s, 13 objects/s recovering
Jan 31 08:09:07 compute-0 ceph-mon[75227]: osdmap e65: 3 total, 3 up, 3 in
Jan 31 08:09:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 08:09:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 08:09:07 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 08:09:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v135: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/249 objects misplaced (3.614%); 554 B/s, 13 objects/s recovering
Jan 31 08:09:07 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 66 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:07 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 66 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=50/35 lis/c=55/50 les/c/f=56/51/0 sis=65) [0] r=0 lpr=65 pi=[50,65)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:08 compute-0 ceph-mon[75227]: 4.b scrub starts
Jan 31 08:09:08 compute-0 ceph-mon[75227]: 4.b scrub ok
Jan 31 08:09:08 compute-0 ceph-mon[75227]: 10.1f scrub starts
Jan 31 08:09:08 compute-0 ceph-mon[75227]: 10.1f scrub ok
Jan 31 08:09:08 compute-0 ceph-mon[75227]: osdmap e66: 3 total, 3 up, 3 in
Jan 31 08:09:08 compute-0 ceph-mon[75227]: pgmap v135: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9/249 objects misplaced (3.614%); 554 B/s, 13 objects/s recovering
Jan 31 08:09:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 489 B/s, 11 objects/s recovering
Jan 31 08:09:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 08:09:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:09:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 31 08:09:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:09:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 08:09:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:09:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:09:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 08:09:09 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:09:09 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 31 08:09:09 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67 pruub=15.362014771s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.710304260s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67 pruub=15.361968994s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.710304260s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67 pruub=15.361686707s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.710273743s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67 pruub=15.361618042s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.710273743s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359692574s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.709526062s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359621048s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.709526062s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359480858s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 active pruub 124.709434509s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:10 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 67 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67 pruub=15.359438896s) [0] r=-1 lpr=67 pi=[54,67)/1 crt=39'39 unknown NOTIFY pruub 124.709434509s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.3( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 67 pg[6.7( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 08:09:10 compute-0 ceph-mon[75227]: pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 489 B/s, 11 objects/s recovering
Jan 31 08:09:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:09:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 08:09:10 compute-0 ceph-mon[75227]: osdmap e67: 3 total, 3 up, 3 in
Jan 31 08:09:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 08:09:10 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.7( v 39'39 lc 37'21 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=67/68 n=2 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:10 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 68 pg[6.f( v 39'39 lc 37'1 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=67) [0] r=0 lpr=67 pi=[54,67)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 1 objects/s recovering
Jan 31 08:09:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 08:09:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:09:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 31 08:09:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:09:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 08:09:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:09:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:09:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 08:09:12 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 08:09:12 compute-0 ceph-mon[75227]: osdmap e68: 3 total, 3 up, 3 in
Jan 31 08:09:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:09:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 31 08:09:12 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.201407433s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 active pruub 126.928176880s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:12 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=48/51 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.201289177s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.928176880s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:12 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 69 pg[6.4( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:12 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.207127571s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 active pruub 126.934516907s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:12 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 69 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69 pruub=10.206985474s) [1] r=-1 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.934516907s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:12 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 69 pg[6.c( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 08:09:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 objects/s recovering
Jan 31 08:09:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 08:09:13 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:09:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 31 08:09:13 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:09:13 compute-0 ceph-mon[75227]: pgmap v139: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 1 objects/s recovering
Jan 31 08:09:13 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:09:13 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 08:09:13 compute-0 ceph-mon[75227]: osdmap e69: 3 total, 3 up, 3 in
Jan 31 08:09:13 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 31 08:09:14 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 31 08:09:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 08:09:14 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 08:09:14 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 70 pg[6.4( v 39'39 lc 37'11 (0'0,39'39] local-lis/les=69/70 n=2 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:14 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 70 pg[6.c( v 39'39 lc 37'17 (0'0,39'39] local-lis/les=69/70 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=69) [1] r=0 lpr=69 pi=[48,69)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:15 compute-0 ceph-mon[75227]: pgmap v141: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 1 objects/s recovering
Jan 31 08:09:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:09:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 31 08:09:15 compute-0 ceph-mon[75227]: 5.10 scrub starts
Jan 31 08:09:15 compute-0 ceph-mon[75227]: 5.10 scrub ok
Jan 31 08:09:15 compute-0 ceph-mon[75227]: osdmap e70: 3 total, 3 up, 3 in
Jan 31 08:09:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 08:09:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:09:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:09:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 08:09:15 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 08:09:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 131 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 08:09:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 31 08:09:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 31 08:09:16 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 31 08:09:16 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 31 08:09:16 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71 pruub=9.507270813s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 active pruub 124.724533081s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:16 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71 pruub=9.507144928s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 unknown NOTIFY pruub 124.724533081s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:16 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71 pruub=9.491441727s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 active pruub 124.709686279s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:16 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 71 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=54/55 n=2 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71 pruub=9.491330147s) [0] r=-1 lpr=71 pi=[54,71)/1 crt=39'39 unknown NOTIFY pruub 124.709686279s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:16 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 71 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:16 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 71 pg[6.5( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:16 compute-0 ceph-mon[75227]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded (PG_DEGRADED)
Jan 31 08:09:16 compute-0 sudo[98336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyzxnvwlqzxaphaccpmdaszrpytlbsil ; /usr/bin/python3'
Jan 31 08:09:16 compute-0 sudo[98336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 08:09:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 08:09:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:09:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 08:09:16 compute-0 ceph-mon[75227]: osdmap e71: 3 total, 3 up, 3 in
Jan 31 08:09:16 compute-0 ceph-mon[75227]: pgmap v144: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 131 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 08:09:16 compute-0 ceph-mon[75227]: 10.1d scrub starts
Jan 31 08:09:16 compute-0 ceph-mon[75227]: 10.1d scrub ok
Jan 31 08:09:16 compute-0 python3[98338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:16 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 72 pg[6.5( v 39'39 lc 37'9 (0'0,39'39] local-lis/les=71/72 n=2 ec=48/23 lis/c=54/54 les/c/f=55/56/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:16 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 08:09:16 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 72 pg[6.d( v 39'39 lc 37'10 (0'0,39'39] local-lis/les=71/72 n=1 ec=48/23 lis/c=54/54 les/c/f=55/57/0 sis=71) [0] r=0 lpr=71 pi=[54,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:16 compute-0 podman[98339]: 2026-01-31 08:09:16.847145906 +0000 UTC m=+0.027272280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:09:17 compute-0 podman[98339]: 2026-01-31 08:09:17.113435384 +0000 UTC m=+0.293561758 container create 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:09:17 compute-0 systemd[76601]: Starting Mark boot as successful...
Jan 31 08:09:17 compute-0 systemd[76601]: Finished Mark boot as successful.
Jan 31 08:09:17 compute-0 systemd[1]: Started libpod-conmon-77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d.scope.
Jan 31 08:09:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc32ff7283532f7aeffbd42bc1d97235c7072d17f9a22b67cf43905b93d1852e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc32ff7283532f7aeffbd42bc1d97235c7072d17f9a22b67cf43905b93d1852e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:17 compute-0 podman[98339]: 2026-01-31 08:09:17.36080374 +0000 UTC m=+0.540930154 container init 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:09:17 compute-0 podman[98339]: 2026-01-31 08:09:17.365801429 +0000 UTC m=+0.545927793 container start 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:17 compute-0 podman[98339]: 2026-01-31 08:09:17.415730801 +0000 UTC m=+0.595857165 container attach 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 113 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:09:18 compute-0 ceph-mon[75227]: 4.6 scrub starts
Jan 31 08:09:18 compute-0 ceph-mon[75227]: 4.6 scrub ok
Jan 31 08:09:18 compute-0 ceph-mon[75227]: Health check failed: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded (PG_DEGRADED)
Jan 31 08:09:18 compute-0 ceph-mon[75227]: osdmap e72: 3 total, 3 up, 3 in
Jan 31 08:09:18 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 31 08:09:18 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 31 08:09:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:19 compute-0 ceph-mon[75227]: pgmap v146: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 113 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:09:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 7 op/s; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 134 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:09:20 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 31 08:09:20 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 31 08:09:20 compute-0 compassionate_lamport[98356]: could not fetch user info: no user info saved
Jan 31 08:09:20 compute-0 ceph-mon[75227]: 4.19 scrub starts
Jan 31 08:09:20 compute-0 ceph-mon[75227]: 4.19 scrub ok
Jan 31 08:09:20 compute-0 ceph-mon[75227]: pgmap v147: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 7 op/s; 1/250 objects degraded (0.400%); 2/250 objects misplaced (0.800%); 134 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 08:09:20 compute-0 systemd[1]: libpod-77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d.scope: Deactivated successfully.
Jan 31 08:09:20 compute-0 podman[98339]: 2026-01-31 08:09:20.586918443 +0000 UTC m=+3.767044837 container died 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc32ff7283532f7aeffbd42bc1d97235c7072d17f9a22b67cf43905b93d1852e-merged.mount: Deactivated successfully.
Jan 31 08:09:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 18 op/s; 335 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 08:09:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 08:09:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:09:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 31 08:09:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:09:21 compute-0 ceph-mon[75227]: 10.1c scrub starts
Jan 31 08:09:21 compute-0 ceph-mon[75227]: 10.1c scrub ok
Jan 31 08:09:21 compute-0 podman[98339]: 2026-01-31 08:09:21.738102569 +0000 UTC m=+4.918228933 container remove 77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d (image=quay.io/ceph/ceph:v20, name=compassionate_lamport, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 08:09:21 compute-0 systemd[1]: libpod-conmon-77aba5e69dd2575770db5a1604e3cb95535183fb3da7f0134903bb59c8bd043d.scope: Deactivated successfully.
Jan 31 08:09:21 compute-0 sudo[98336]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 08:09:21 compute-0 sudo[98476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwixnvflhnzrixmklyvupoblcftxpez ; /usr/bin/python3'
Jan 31 08:09:21 compute-0 sudo[98476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded)
Jan 31 08:09:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 08:09:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:09:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:09:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 08:09:22 compute-0 python3[98478]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 82c880e6-d992-5408-8b12-efff9c275473 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:22 compute-0 podman[98479]: 2026-01-31 08:09:22.138834999 +0000 UTC m=+0.018815270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 31 08:09:22 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 31 08:09:22 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 31 08:09:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 08:09:22 compute-0 podman[98479]: 2026-01-31 08:09:22.900141636 +0000 UTC m=+0.780121927 container create e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:09:23 compute-0 systemd[1]: Started libpod-conmon-e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2.scope.
Jan 31 08:09:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99504d6cae02201ca5482d6e34f003d25ab28fb7e7eb2531f7629e8e7a4beae3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99504d6cae02201ca5482d6e34f003d25ab28fb7e7eb2531f7629e8e7a4beae3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:23 compute-0 ceph-mon[75227]: pgmap v148: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 18 op/s; 335 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 08:09:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:09:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 31 08:09:23 compute-0 ceph-mon[75227]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/250 objects degraded (0.400%), 1 pg degraded)
Jan 31 08:09:23 compute-0 ceph-mon[75227]: Cluster is now healthy
Jan 31 08:09:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:09:23 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 08:09:23 compute-0 ceph-mon[75227]: osdmap e73: 3 total, 3 up, 3 in
Jan 31 08:09:23 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 31 08:09:23 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 31 08:09:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 31 08:09:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 31 08:09:23 compute-0 podman[98479]: 2026-01-31 08:09:23.602897275 +0000 UTC m=+1.482877546 container init e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:09:23 compute-0 podman[98479]: 2026-01-31 08:09:23.610591174 +0000 UTC m=+1.490571425 container start e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 16 op/s; 222 B/s, 0 objects/s recovering
Jan 31 08:09:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 08:09:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:09:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 31 08:09:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.146315575s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 active pruub 137.893890381s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.146218300s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 137.893890381s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:23 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154381752s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=72'485 lcod 72'485 active pruub 137.902816772s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154334068s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=72'485 lcod 72'485 unknown NOTIFY pruub 137.902816772s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154219627s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 active pruub 137.902801514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.154117584s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 137.902801514s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.153962135s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 41'483 active pruub 137.903045654s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:23 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 73 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73 pruub=15.153791428s) [2] r=-1 lpr=73 pi=[50,73)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 137.903045654s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:23 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:23 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:23 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 73 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=73) [2] r=0 lpr=73 pi=[50,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:23 compute-0 podman[98479]: 2026-01-31 08:09:23.838996856 +0000 UTC m=+1.718977147 container attach e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:09:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 08:09:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:09:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:09:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 08:09:24 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=74 pruub=14.993849754s) [2] r=-1 lpr=74 pi=[65,74)/1 crt=72'484 lcod 72'484 active pruub 143.717453003s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=74 pruub=14.993807793s) [2] r=-1 lpr=74 pi=[65,74)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 143.717453003s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=74 pruub=13.763647079s) [2] r=-1 lpr=74 pi=[64,74)/1 crt=41'483 lcod 41'483 active pruub 142.487899780s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=74 pruub=13.763580322s) [2] r=-1 lpr=74 pi=[64,74)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 142.487899780s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484574318s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 lcod 41'483 active pruub 140.209106445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484548569s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 140.209106445s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484263420s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 active pruub 140.209136963s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 74 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74 pruub=11.484206200s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=41'483 unknown NOTIFY pruub 140.209136963s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=74) [2] r=0 lpr=74 pi=[65,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.6( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=74) [2] r=0 lpr=74 pi=[64,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74) [2] r=0 lpr=74 pi=[62,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=74) [2] r=0 lpr=74 pi=[62,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 74 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[50,74)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:24 compute-0 ceph-mon[75227]: 4.3 scrub starts
Jan 31 08:09:24 compute-0 ceph-mon[75227]: 4.3 scrub ok
Jan 31 08:09:24 compute-0 ceph-mon[75227]: pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 16 op/s; 222 B/s, 0 objects/s recovering
Jan 31 08:09:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:09:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=72'485 lcod 72'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=72'485 lcod 72'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:24 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 74 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:24 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 31 08:09:25 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 31 08:09:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 08:09:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 08:09:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 19 op/s; 222 B/s, 0 objects/s recovering
Jan 31 08:09:25 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[64,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.17( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[64,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[65,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.7( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[65,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:25 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 75 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=-1 lpr=75 pi=[62,75)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=62/63 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=0 lpr=75 pi=[65,75)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=62/63 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] r=0 lpr=75 pi=[62,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=65/66 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] r=0 lpr=75 pi=[65,75)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=0 lpr=75 pi=[64,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:26 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 75 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] r=0 lpr=75 pi=[64,75)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:26 compute-0 ceph-mon[75227]: 4.0 scrub starts
Jan 31 08:09:26 compute-0 ceph-mon[75227]: 4.0 scrub ok
Jan 31 08:09:26 compute-0 ceph-mon[75227]: 11.16 scrub starts
Jan 31 08:09:26 compute-0 ceph-mon[75227]: 11.16 scrub ok
Jan 31 08:09:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:09:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 08:09:26 compute-0 ceph-mon[75227]: osdmap e74: 3 total, 3 up, 3 in
Jan 31 08:09:26 compute-0 ceph-mon[75227]: 2.14 scrub starts
Jan 31 08:09:26 compute-0 ceph-mon[75227]: 2.14 scrub ok
Jan 31 08:09:26 compute-0 ceph-mon[75227]: osdmap e75: 3 total, 3 up, 3 in
Jan 31 08:09:26 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.1e( v 72'484 (0'0,72'484] local-lis/les=74/75 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=72'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:26 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=74/75 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:26 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.e( v 72'486 (0'0,72'486] local-lis/les=74/75 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=72'486 lcod 72'485 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:26 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 75 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=74/75 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[50,74)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 08:09:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 08:09:27 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 08:09:27 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76) [2] r=0 lpr=76 pi=[50,76)/1 pct=0'0 crt=72'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:27 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76) [2] r=0 lpr=76 pi=[50,76)/1 crt=72'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:27 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76 pruub=14.690245628s) [2] async=[2] r=-1 lpr=76 pi=[50,76)/1 crt=72'484 lcod 72'484 active pruub 140.986663818s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:27 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 76 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76 pruub=14.690156937s) [2] r=-1 lpr=76 pi=[50,76)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 140.986663818s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:27 compute-0 ceph-mon[75227]: pgmap v152: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 19 op/s; 222 B/s, 0 objects/s recovering
Jan 31 08:09:27 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.f( v 72'484 (0'0,72'484] local-lis/les=75/76 n=7 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[62,75)/1 crt=72'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:27 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.17( v 72'484 (0'0,72'484] local-lis/les=75/76 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[64,75)/1 crt=72'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:27 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=75/76 n=6 ec=50/35 lis/c=62/62 les/c/f=63/63/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[62,75)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:27 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 76 pg[9.7( v 72'485 (0'0,72'485] local-lis/les=75/76 n=7 ec=50/35 lis/c=65/65 les/c/f=66/66/0 sis=75) [2]/[0] async=[2] r=0 lpr=75 pi=[65,75)/1 crt=72'485 lcod 72'484 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 3 op/s
Jan 31 08:09:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 08:09:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 08:09:28 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 08:09:28 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 pct=0'0 crt=75'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:28 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=75'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:28 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:28 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:28 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442836761s) [2] async=[2] r=-1 lpr=77 pi=[50,77)/1 crt=41'483 lcod 0'0 active pruub 140.986724854s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:28 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442744255s) [2] r=-1 lpr=77 pi=[50,77)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 140.986724854s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:28 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442586899s) [2] async=[2] r=-1 lpr=77 pi=[50,77)/1 crt=75'487 lcod 75'487 active pruub 140.986816406s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:28 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 77 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=74/75 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77 pruub=13.442452431s) [2] r=-1 lpr=77 pi=[50,77)/1 crt=75'487 lcod 75'487 unknown NOTIFY pruub 140.986816406s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:28 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 77 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=76) [2] r=0 lpr=76 pi=[50,76)/1 crt=75'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:29 compute-0 ceph-mon[75227]: osdmap e76: 3 total, 3 up, 3 in
Jan 31 08:09:29 compute-0 ceph-mon[75227]: pgmap v155: 305 pgs: 4 remapped+peering, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 3 op/s
Jan 31 08:09:29 compute-0 ceph-mon[75227]: osdmap e77: 3 total, 3 up, 3 in
Jan 31 08:09:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 08:09:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 08:09:29 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 08:09:29 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 31 08:09:29 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78 pruub=14.123814583s) [2] async=[2] r=-1 lpr=78 pi=[62,78)/1 crt=72'484 lcod 72'484 active pruub 147.509994507s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:29 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78 pruub=14.123645782s) [2] r=-1 lpr=78 pi=[62,78)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 147.509994507s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:29 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78) [2] r=0 lpr=78 pi=[62,78)/1 pct=0'0 crt=72'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:29 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78) [2] r=0 lpr=78 pi=[62,78)/1 crt=72'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:29 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 31 08:09:29 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.e( v 75'488 (0'0,75'488] local-lis/les=77/78 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=75'488 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:29 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 78 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=77/78 n=7 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=77) [2] r=0 lpr=77 pi=[50,77)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 1 active+recovery_wait+remapped, 1 active+remapped, 1 active+recovering+remapped, 4 remapped+peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 5 op/s; 12/250 objects misplaced (4.800%); 38 B/s, 0 objects/s recovering
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]: {
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "user_id": "openstack",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "display_name": "openstack",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "email": "",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "suspended": 0,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "max_buckets": 1000,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "subusers": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "keys": [
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         {
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:             "user": "openstack",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:             "access_key": "4KDH2DMGXG8BAVVQQ56K",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:             "secret_key": "2aeBpeL3MIacX0sq4TmgW6Hei3rNmrWGE6tXmn0q",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:             "active": true,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:             "create_date": "2026-01-31T08:09:30.193014Z"
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         }
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     ],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "swift_keys": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "caps": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "op_mask": "read, write, delete",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "default_placement": "",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "default_storage_class": "",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "placement_tags": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "bucket_quota": {
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "enabled": false,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "check_on_raw": false,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "max_size": -1,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "max_size_kb": 0,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "max_objects": -1
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     },
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "user_quota": {
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "enabled": false,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "check_on_raw": false,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "max_size": -1,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "max_size_kb": 0,
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:         "max_objects": -1
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     },
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "temp_url_keys": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "type": "rgw",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "mfa_ids": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "account_id": "",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "path": "/",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "create_date": "2026-01-31T08:09:30.192463Z",
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "tags": [],
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]:     "group_ids": []
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]: }
Jan 31 08:09:30 compute-0 gallant_montalcini[98494]: 
Jan 31 08:09:30 compute-0 systemd[1]: libpod-e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2.scope: Deactivated successfully.
Jan 31 08:09:30 compute-0 podman[98479]: 2026-01-31 08:09:30.228490127 +0000 UTC m=+8.108470418 container died e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-99504d6cae02201ca5482d6e34f003d25ab28fb7e7eb2531f7629e8e7a4beae3-merged.mount: Deactivated successfully.
Jan 31 08:09:30 compute-0 podman[98479]: 2026-01-31 08:09:30.275690768 +0000 UTC m=+8.155671019 container remove e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2 (image=quay.io/ceph/ceph:v20, name=gallant_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:09:30 compute-0 systemd[1]: libpod-conmon-e84a3abb59df4c2c9802932ded4f4462c11b129d697a84a77565f828e96cbfe2.scope: Deactivated successfully.
Jan 31 08:09:30 compute-0 sudo[98476]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 08:09:30 compute-0 ceph-mon[75227]: osdmap e78: 3 total, 3 up, 3 in
Jan 31 08:09:30 compute-0 ceph-mon[75227]: 7.19 scrub starts
Jan 31 08:09:30 compute-0 ceph-mon[75227]: 7.19 scrub ok
Jan 31 08:09:30 compute-0 ceph-mon[75227]: pgmap v158: 305 pgs: 1 active+recovery_wait+remapped, 1 active+remapped, 1 active+recovering+remapped, 4 remapped+peering, 298 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 5 op/s; 12/250 objects misplaced (4.800%); 38 B/s, 0 objects/s recovering
Jan 31 08:09:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 08:09:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 08:09:30 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79 pruub=11.654903412s) [2] async=[2] r=-1 lpr=79 pi=[50,79)/1 crt=41'483 lcod 0'0 active pruub 140.986663818s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=74/75 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79 pruub=11.654653549s) [2] r=-1 lpr=79 pi=[50,79)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 140.986663818s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79) [2] r=0 lpr=79 pi=[50,79)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79) [2] r=0 lpr=79 pi=[50,79)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79 pruub=13.149453163s) [2] async=[2] r=-1 lpr=79 pi=[64,79)/1 crt=72'484 lcod 72'484 active pruub 147.522628784s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79 pruub=13.149522781s) [2] async=[2] r=-1 lpr=79 pi=[65,79)/1 crt=76'486 lcod 76'486 active pruub 147.522735596s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79 pruub=13.149383545s) [2] r=-1 lpr=79 pi=[64,79)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 147.522628784s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=75/76 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79 pruub=13.149377823s) [2] r=-1 lpr=79 pi=[65,79)/1 crt=76'486 lcod 76'486 unknown NOTIFY pruub 147.522735596s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79) [2] r=0 lpr=79 pi=[64,79)/1 pct=0'0 crt=72'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79) [2] r=0 lpr=79 pi=[64,79)/1 crt=72'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79 pruub=13.148480415s) [2] async=[2] r=-1 lpr=79 pi=[62,79)/1 crt=41'483 active pruub 147.522628784s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=75/76 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79 pruub=13.148399353s) [2] r=-1 lpr=79 pi=[62,79)/1 crt=41'483 unknown NOTIFY pruub 147.522628784s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79) [2] r=0 lpr=79 pi=[65,79)/1 pct=0'0 crt=76'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=0/0 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79) [2] r=0 lpr=79 pi=[65,79)/1 crt=76'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79) [2] r=0 lpr=79 pi=[62,79)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79) [2] r=0 lpr=79 pi=[62,79)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 79 pg[9.f( v 76'485 (0'0,76'485] local-lis/les=78/79 n=7 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=78) [2] r=0 lpr=78 pi=[62,78)/1 crt=76'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:30 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 31 08:09:30 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 31 08:09:30 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 08:09:30 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 08:09:31 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 31 08:09:31 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 31 08:09:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 08:09:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 08:09:31 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 08:09:31 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=74/50 les/c/f=75/51/0 sis=79) [2] r=0 lpr=79 pi=[50,79)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:31 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.17( v 78'485 (0'0,78'485] local-lis/les=79/80 n=6 ec=50/35 lis/c=75/64 les/c/f=76/65/0 sis=79) [2] r=0 lpr=79 pi=[64,79)/1 crt=78'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:31 compute-0 ceph-mon[75227]: osdmap e79: 3 total, 3 up, 3 in
Jan 31 08:09:31 compute-0 ceph-mon[75227]: 8.16 scrub starts
Jan 31 08:09:31 compute-0 ceph-mon[75227]: 8.16 scrub ok
Jan 31 08:09:31 compute-0 ceph-mon[75227]: 10.1b scrub starts
Jan 31 08:09:31 compute-0 ceph-mon[75227]: 10.1b scrub ok
Jan 31 08:09:31 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.7( v 78'487 (0'0,78'487] local-lis/les=79/80 n=7 ec=50/35 lis/c=75/65 les/c/f=76/66/0 sis=79) [2] r=0 lpr=79 pi=[65,79)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:31 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 80 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=75/62 les/c/f=76/63/0 sis=79) [2] r=0 lpr=79 pi=[62,79)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:09:31
Jan 31 08:09:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:09:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Jan 31 08:09:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 1 active+recovery_wait+remapped, 1 peering, 2 active+remapped, 1 active+recovering+remapped, 300 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 25 op/s; 12/250 objects misplaced (4.800%); 311 B/s, 8 objects/s recovering
Jan 31 08:09:32 compute-0 ceph-mon[75227]: 4.c scrub starts
Jan 31 08:09:32 compute-0 ceph-mon[75227]: 4.c scrub ok
Jan 31 08:09:32 compute-0 ceph-mon[75227]: osdmap e80: 3 total, 3 up, 3 in
Jan 31 08:09:32 compute-0 ceph-mon[75227]: pgmap v161: 305 pgs: 1 active+recovery_wait+remapped, 1 peering, 2 active+remapped, 1 active+recovering+remapped, 300 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 25 op/s; 12/250 objects misplaced (4.800%); 311 B/s, 8 objects/s recovering
Jan 31 08:09:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 31 08:09:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:09:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:09:33 compute-0 ceph-mon[75227]: 3.1c scrub starts
Jan 31 08:09:33 compute-0 ceph-mon[75227]: 3.1c scrub ok
Jan 31 08:09:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 active+recovery_wait+remapped, 1 peering, 2 active+remapped, 1 active+recovering+remapped, 300 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 19 op/s; 12/250 objects misplaced (4.800%); 235 B/s, 6 objects/s recovering
Jan 31 08:09:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:34 compute-0 ceph-mon[75227]: pgmap v162: 305 pgs: 1 active+recovery_wait+remapped, 1 peering, 2 active+remapped, 1 active+recovering+remapped, 300 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 19 op/s; 12/250 objects misplaced (4.800%); 235 B/s, 6 objects/s recovering
Jan 31 08:09:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 323 B/s wr, 20 op/s; 315 B/s, 7 objects/s recovering
Jan 31 08:09:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 08:09:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:09:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 31 08:09:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:09:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 08:09:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:09:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:09:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 08:09:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:09:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 31 08:09:35 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 81 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81 pruub=11.125459671s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=39'39 lcod 0'0 active pruub 150.934844971s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:35 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 81 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=48/51 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81 pruub=11.125280380s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 150.934844971s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:35 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 08:09:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 81 pg[6.8( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[48,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:35 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 31 08:09:35 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 31 08:09:36 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450086594s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=41'483 lcod 0'0 active pruub 145.902664185s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:36 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450015068s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 145.902664185s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:36 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450208664s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=76'486 lcod 76'486 active pruub 145.903137207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:36 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 81 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81 pruub=10.450128555s) [2] r=-1 lpr=81 pi=[50,81)/1 crt=76'486 lcod 76'486 unknown NOTIFY pruub 145.903137207s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:36 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 81 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:36 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 81 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[50,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 08:09:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 08:09:36 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 08:09:36 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 08:09:36 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:36 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.8( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:36 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:36 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[9.18( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=-1 lpr=82 pi=[50,82)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:36 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 08:09:37 compute-0 ceph-mon[75227]: pgmap v163: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 323 B/s wr, 20 op/s; 315 B/s, 7 objects/s recovering
Jan 31 08:09:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:09:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 08:09:37 compute-0 ceph-mon[75227]: osdmap e81: 3 total, 3 up, 3 in
Jan 31 08:09:37 compute-0 ceph-mon[75227]: 2.12 scrub starts
Jan 31 08:09:37 compute-0 ceph-mon[75227]: 2.12 scrub ok
Jan 31 08:09:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:37 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 82 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] r=0 lpr=82 pi=[50,82)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:37 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 82 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=81/82 n=1 ec=48/23 lis/c=48/48 les/c/f=51/51/0 sis=81) [2] r=0 lpr=81 pi=[48,81)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:37 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 31 08:09:37 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 31 08:09:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 325 B/s wr, 3 op/s; 118 B/s, 1 objects/s recovering
Jan 31 08:09:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 08:09:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:09:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 31 08:09:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 08:09:38 compute-0 sudo[98593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:38 compute-0 sudo[98593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:38 compute-0 sudo[98593]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 08:09:38 compute-0 sudo[98618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:09:38 compute-0 sudo[98618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:38 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.604473114s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=39'39 lcod 0'0 active pruub 148.712814331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:38 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=54/55 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83 pruub=11.604420662s) [0] r=-1 lpr=83 pi=[54,83)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 148.712814331s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:38 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 83 pg[6.9( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83) [0] r=0 lpr=83 pi=[54,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:38 compute-0 ceph-mon[75227]: osdmap e82: 3 total, 3 up, 3 in
Jan 31 08:09:38 compute-0 ceph-mon[75227]: 10.18 scrub starts
Jan 31 08:09:38 compute-0 ceph-mon[75227]: 10.18 scrub ok
Jan 31 08:09:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:09:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 31 08:09:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 31 08:09:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 31 08:09:38 compute-0 sudo[98618]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:09:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:09:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:09:38 compute-0 sudo[98673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:38 compute-0 sudo[98673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:38 compute-0 sudo[98673]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:38 compute-0 sudo[98698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:09:38 compute-0 sudo[98698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:38 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 31 08:09:38 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 31 08:09:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 08:09:39 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=82/83 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[50,82)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:39 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 83 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=82/83 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=82) [2]/[1] async=[2] r=0 lpr=82 pi=[50,82)/1 crt=78'487 lcod 76'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 08:09:39 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 08:09:39 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 84 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=83/84 n=1 ec=48/23 lis/c=54/54 les/c/f=55/55/0 sis=83) [0] r=0 lpr=83 pi=[54,83)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.169443908 +0000 UTC m=+0.020489989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.298830141 +0000 UTC m=+0.149876212 container create 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 31 08:09:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:39 compute-0 ceph-mon[75227]: 8.17 scrub starts
Jan 31 08:09:39 compute-0 ceph-mon[75227]: 8.17 scrub ok
Jan 31 08:09:39 compute-0 ceph-mon[75227]: pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 325 B/s wr, 3 op/s; 118 B/s, 1 objects/s recovering
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 08:09:39 compute-0 ceph-mon[75227]: osdmap e83: 3 total, 3 up, 3 in
Jan 31 08:09:39 compute-0 ceph-mon[75227]: 11.13 scrub starts
Jan 31 08:09:39 compute-0 ceph-mon[75227]: 11.13 scrub ok
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:09:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:09:39 compute-0 ceph-mon[75227]: osdmap e84: 3 total, 3 up, 3 in
Jan 31 08:09:39 compute-0 systemd[1]: Started libpod-conmon-33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee.scope.
Jan 31 08:09:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.575165487 +0000 UTC m=+0.426211568 container init 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.583399711 +0000 UTC m=+0.434445772 container start 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:09:39 compute-0 naughty_wu[98751]: 167 167
Jan 31 08:09:39 compute-0 systemd[1]: libpod-33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee.scope: Deactivated successfully.
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.618816193 +0000 UTC m=+0.469862264 container attach 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.61906995 +0000 UTC m=+0.470116011 container died 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 08:09:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:09:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 31 08:09:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef390ce3237eb08a76ef2d6a5437afa0f196c3b716b7612fdcd94fa581f1297-merged.mount: Deactivated successfully.
Jan 31 08:09:39 compute-0 podman[98735]: 2026-01-31 08:09:39.992952603 +0000 UTC m=+0.843998674 container remove 33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wu, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:09:39 compute-0 systemd[1]: libpod-conmon-33ad0cb98de224978781d7182f0a7567bc88039f6355763e33b4f01e0bc763ee.scope: Deactivated successfully.
Jan 31 08:09:40 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 08:09:40 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 08:09:40 compute-0 podman[98777]: 2026-01-31 08:09:40.15920095 +0000 UTC m=+0.084488320 container create cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:09:40 compute-0 podman[98777]: 2026-01-31 08:09:40.098652442 +0000 UTC m=+0.023939822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:40 compute-0 systemd[1]: Started libpod-conmon-cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573.scope.
Jan 31 08:09:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:40 compute-0 podman[98777]: 2026-01-31 08:09:40.430719703 +0000 UTC m=+0.356007053 container init cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:09:40 compute-0 podman[98777]: 2026-01-31 08:09:40.438184135 +0000 UTC m=+0.363471505 container start cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 08:09:40 compute-0 ceph-mon[75227]: 2.10 scrub starts
Jan 31 08:09:40 compute-0 ceph-mon[75227]: 2.10 scrub ok
Jan 31 08:09:40 compute-0 ceph-mon[75227]: pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:40 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:09:40 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 31 08:09:40 compute-0 ceph-mon[75227]: 5.17 scrub starts
Jan 31 08:09:40 compute-0 ceph-mon[75227]: 5.17 scrub ok
Jan 31 08:09:40 compute-0 podman[98777]: 2026-01-31 08:09:40.471873965 +0000 UTC m=+0.397161295 container attach cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:09:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 31 08:09:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 31 08:09:40 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:09:40 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:09:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 08:09:40 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 08:09:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=82/83 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.465391159s) [2] async=[2] r=-1 lpr=85 pi=[50,85)/1 crt=78'487 lcod 76'486 active pruub 154.003845215s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=82/83 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.428462029s) [2] async=[2] r=-1 lpr=85 pi=[50,85)/1 crt=41'483 lcod 0'0 active pruub 153.966842651s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=82/83 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.428300858s) [2] r=-1 lpr=85 pi=[50,85)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 153.966842651s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=82/83 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85 pruub=14.465301514s) [2] r=-1 lpr=85 pi=[50,85)/1 crt=78'487 lcod 76'486 unknown NOTIFY pruub 154.003845215s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=11.792774200s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=39'39 lcod 0'0 active pruub 151.332031250s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:40 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 85 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=56/57 n=1 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=11.792756081s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 151.332031250s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:40 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 85 pg[6.a( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85) [0] r=0 lpr=85 pi=[56,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:40 compute-0 nostalgic_bhaskara[98794]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:09:40 compute-0 nostalgic_bhaskara[98794]: --> All data devices are unavailable
Jan 31 08:09:40 compute-0 systemd[1]: libpod-cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573.scope: Deactivated successfully.
Jan 31 08:09:40 compute-0 podman[98777]: 2026-01-31 08:09:40.841596444 +0000 UTC m=+0.766883804 container died cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:09:40 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 31 08:09:40 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 31 08:09:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbda0a376dfffedfe34b61ba25fc5ef872103418477c54c68218699a7d5874ba-merged.mount: Deactivated successfully.
Jan 31 08:09:41 compute-0 podman[98777]: 2026-01-31 08:09:41.227424952 +0000 UTC m=+1.152712312 container remove cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:09:41 compute-0 sudo[98698]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:41 compute-0 systemd[1]: libpod-conmon-cde9b967240cf6c124997ae491985ffed0cc147b969b01de54b5ae6449040573.scope: Deactivated successfully.
Jan 31 08:09:41 compute-0 sudo[98826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:41 compute-0 sudo[98826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:41 compute-0 sudo[98826]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:41 compute-0 sudo[98851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:09:41 compute-0 sudo[98851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:41 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 31 08:09:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 08:09:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 08:09:41 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 31 08:09:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 08:09:41 compute-0 podman[98889]: 2026-01-31 08:09:41.613207888 +0000 UTC m=+0.029204248 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 2 active+remapped, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Jan 31 08:09:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 08:09:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:09:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 31 08:09:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:09:41 compute-0 ceph-mon[75227]: 7.1e scrub starts
Jan 31 08:09:41 compute-0 ceph-mon[75227]: 7.1e scrub ok
Jan 31 08:09:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:09:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 08:09:41 compute-0 ceph-mon[75227]: osdmap e85: 3 total, 3 up, 3 in
Jan 31 08:09:41 compute-0 ceph-mon[75227]: 5.8 scrub starts
Jan 31 08:09:41 compute-0 ceph-mon[75227]: 5.8 scrub ok
Jan 31 08:09:41 compute-0 podman[98889]: 2026-01-31 08:09:41.860812451 +0000 UTC m=+0.276808791 container create 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:41 compute-0 systemd[1]: Started libpod-conmon-97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828.scope.
Jan 31 08:09:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 08:09:42 compute-0 podman[98889]: 2026-01-31 08:09:42.404901508 +0000 UTC m=+0.820897928 container init 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:09:42 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 08:09:42 compute-0 podman[98889]: 2026-01-31 08:09:42.414470952 +0000 UTC m=+0.830467292 container start 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:09:42 compute-0 fervent_swartz[98905]: 167 167
Jan 31 08:09:42 compute-0 systemd[1]: libpod-97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828.scope: Deactivated successfully.
Jan 31 08:09:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 86 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=48/23 lis/c=56/56 les/c/f=57/57/0 sis=85) [0] r=0 lpr=85 pi=[56,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:42 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 86 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=85/86 n=7 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:42 compute-0 podman[98889]: 2026-01-31 08:09:42.484637976 +0000 UTC m=+0.900634396 container attach 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:09:42 compute-0 podman[98889]: 2026-01-31 08:09:42.48511229 +0000 UTC m=+0.901108670 container died 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:42 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 31 08:09:42 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 86 pg[9.18( v 78'487 (0'0,78'487] local-lis/les=85/86 n=6 ec=50/35 lis/c=82/50 les/c/f=83/51/0 sis=85) [2] r=0 lpr=85 pi=[50,85)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:42 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 31 08:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-65b61540c9d1bcba4f093c979b9b921c813d1d24f575f14044b87b1cb3cb3c1f-merged.mount: Deactivated successfully.
Jan 31 08:09:42 compute-0 podman[98889]: 2026-01-31 08:09:42.755641234 +0000 UTC m=+1.171637604 container remove 97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:42 compute-0 systemd[1]: libpod-conmon-97c24bb4f9a47a7f1af59723cd8ca905773e9c4086dc716a512fc0cca293d828.scope: Deactivated successfully.
Jan 31 08:09:42 compute-0 ceph-mon[75227]: 3.19 scrub starts
Jan 31 08:09:42 compute-0 ceph-mon[75227]: 4.15 scrub starts
Jan 31 08:09:42 compute-0 ceph-mon[75227]: 3.19 scrub ok
Jan 31 08:09:42 compute-0 ceph-mon[75227]: 4.15 scrub ok
Jan 31 08:09:42 compute-0 ceph-mon[75227]: pgmap v171: 305 pgs: 2 active+remapped, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 2 objects/s recovering
Jan 31 08:09:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:09:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 31 08:09:42 compute-0 ceph-mon[75227]: osdmap e86: 3 total, 3 up, 3 in
Jan 31 08:09:42 compute-0 podman[98929]: 2026-01-31 08:09:42.947061528 +0000 UTC m=+0.062661052 container create 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6463876445909616e-06 of space, bias 4.0, pg target 0.001975665173509154 quantized to 16 (current 16)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.45134954743765e-06 of space, bias 1.0, pg target 0.0013354048642312951 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:09:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:09:43 compute-0 podman[98929]: 2026-01-31 08:09:42.910766201 +0000 UTC m=+0.026365755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:43 compute-0 systemd[1]: Started libpod-conmon-03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28.scope.
Jan 31 08:09:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:43 compute-0 podman[98929]: 2026-01-31 08:09:43.111311396 +0000 UTC m=+0.226910970 container init 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:09:43 compute-0 podman[98929]: 2026-01-31 08:09:43.116354856 +0000 UTC m=+0.231954380 container start 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:43 compute-0 podman[98929]: 2026-01-31 08:09:43.17811271 +0000 UTC m=+0.293712244 container attach 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:09:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]: {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:     "0": [
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:         {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "devices": [
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "/dev/loop3"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             ],
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_name": "ceph_lv0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_size": "21470642176",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "name": "ceph_lv0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "tags": {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.crush_device_class": "",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.encrypted": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.objectstore": "bluestore",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osd_id": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.type": "block",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.vdo": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.with_tpm": "0"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             },
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "type": "block",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "vg_name": "ceph_vg0"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:         }
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:     ],
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:     "1": [
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:         {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "devices": [
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "/dev/loop4"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             ],
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_name": "ceph_lv1",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_size": "21470642176",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "name": "ceph_lv1",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "tags": {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.crush_device_class": "",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.encrypted": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.objectstore": "bluestore",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osd_id": "1",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.type": "block",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.vdo": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.with_tpm": "0"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             },
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "type": "block",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "vg_name": "ceph_vg1"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:         }
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:     ],
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:     "2": [
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:         {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "devices": [
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "/dev/loop5"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             ],
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_name": "ceph_lv2",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_size": "21470642176",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "name": "ceph_lv2",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "tags": {
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.cluster_name": "ceph",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.crush_device_class": "",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.encrypted": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.objectstore": "bluestore",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osd_id": "2",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.type": "block",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.vdo": "0",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:                 "ceph.with_tpm": "0"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             },
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "type": "block",
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:             "vg_name": "ceph_vg2"
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:         }
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]:     ]
Jan 31 08:09:43 compute-0 upbeat_albattani[98946]: }
Jan 31 08:09:43 compute-0 systemd[1]: libpod-03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28.scope: Deactivated successfully.
Jan 31 08:09:43 compute-0 podman[98929]: 2026-01-31 08:09:43.456693532 +0000 UTC m=+0.572293046 container died 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:09:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:09:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:09:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 08:09:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 08:09:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 08:09:43 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 08:09:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 31 08:09:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 31 08:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5a03c1bab692b002dab876c41f2af2c6f98fed7923e51ed9b8f1f469a82d941-merged.mount: Deactivated successfully.
Jan 31 08:09:43 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 87 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=15.185071945s) [1] r=-1 lpr=87 pi=[67,87)/1 crt=39'39 active pruub 162.757888794s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:43 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 87 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87 pruub=15.185009003s) [1] r=-1 lpr=87 pi=[67,87)/1 crt=39'39 unknown NOTIFY pruub 162.757888794s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:43 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 87 pg[6.b( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87) [1] r=0 lpr=87 pi=[67,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 2 active+remapped, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 2 objects/s recovering
Jan 31 08:09:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 08:09:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:09:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 31 08:09:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:09:43 compute-0 podman[98929]: 2026-01-31 08:09:43.794157614 +0000 UTC m=+0.909757148 container remove 03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:43 compute-0 systemd[1]: libpod-conmon-03020ddf96bc4edf8bf726d59a5b660ce6890aa28a9c40bd9cf7a79ff0148e28.scope: Deactivated successfully.
Jan 31 08:09:43 compute-0 sudo[98851]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:43 compute-0 sudo[98969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:09:43 compute-0 sudo[98969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:43 compute-0 sudo[98969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:43 compute-0 sudo[98994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:09:43 compute-0 sudo[98994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:43 compute-0 ceph-mon[75227]: 4.16 scrub starts
Jan 31 08:09:43 compute-0 ceph-mon[75227]: 4.16 scrub ok
Jan 31 08:09:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:09:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 08:09:43 compute-0 ceph-mon[75227]: osdmap e87: 3 total, 3 up, 3 in
Jan 31 08:09:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:09:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.283598167 +0000 UTC m=+0.092483817 container create 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.222977937 +0000 UTC m=+0.031863447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:44 compute-0 systemd[1]: Started libpod-conmon-19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91.scope.
Jan 31 08:09:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.423909814 +0000 UTC m=+0.232795314 container init 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.433377655 +0000 UTC m=+0.242263085 container start 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:44 compute-0 nifty_leavitt[99047]: 167 167
Jan 31 08:09:44 compute-0 systemd[1]: libpod-19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91.scope: Deactivated successfully.
Jan 31 08:09:44 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 31 08:09:44 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 31 08:09:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.464898141 +0000 UTC m=+0.273783561 container attach 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.465408016 +0000 UTC m=+0.274293436 container died 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:09:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 08:09:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 08:09:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:09:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:09:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 08:09:44 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 08:09:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.368991852s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=41'483 lcod 0'0 active pruub 153.902481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.369693756s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=76'486 lcod 76'486 active pruub 153.903579712s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.369609833s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=76'486 lcod 76'486 unknown NOTIFY pruub 153.903579712s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88 pruub=10.368498802s) [2] r=-1 lpr=88 pi=[50,88)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 153.902481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:44 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 88 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88) [2] r=0 lpr=88 pi=[50,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:44 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 88 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=88) [2] r=0 lpr=88 pi=[50,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 88 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=87/88 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=87) [1] r=0 lpr=87 pi=[67,87)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-da25fe6ad9b8efeb1e9e99e075758e977306b49d2c3140fdb551ad5df98b9db0-merged.mount: Deactivated successfully.
Jan 31 08:09:44 compute-0 podman[99031]: 2026-01-31 08:09:44.838393822 +0000 UTC m=+0.647279322 container remove 19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:44 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 08:09:44 compute-0 systemd[1]: libpod-conmon-19acd21e8b3b7374391d5c3b858b208d054c29ee6d436ac7bf7477aab73a1a91.scope: Deactivated successfully.
Jan 31 08:09:44 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 08:09:45 compute-0 ceph-mon[75227]: 4.17 scrub starts
Jan 31 08:09:45 compute-0 ceph-mon[75227]: 4.17 scrub ok
Jan 31 08:09:45 compute-0 ceph-mon[75227]: 7.1d scrub starts
Jan 31 08:09:45 compute-0 ceph-mon[75227]: 7.1d scrub ok
Jan 31 08:09:45 compute-0 ceph-mon[75227]: pgmap v174: 305 pgs: 2 active+remapped, 303 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 2 objects/s recovering
Jan 31 08:09:45 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:09:45 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 08:09:45 compute-0 ceph-mon[75227]: osdmap e88: 3 total, 3 up, 3 in
Jan 31 08:09:45 compute-0 podman[99072]: 2026-01-31 08:09:44.979480562 +0000 UTC m=+0.024294882 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:09:45 compute-0 podman[99072]: 2026-01-31 08:09:45.129139716 +0000 UTC m=+0.173954026 container create 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:09:45 compute-0 systemd[1]: Started libpod-conmon-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope.
Jan 31 08:09:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:09:45 compute-0 podman[99072]: 2026-01-31 08:09:45.306782052 +0000 UTC m=+0.351596452 container init 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:09:45 compute-0 podman[99072]: 2026-01-31 08:09:45.312839342 +0000 UTC m=+0.357653672 container start 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:09:45 compute-0 podman[99072]: 2026-01-31 08:09:45.396070453 +0000 UTC m=+0.440884793 container attach 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:09:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 31 08:09:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 31 08:09:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 08:09:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:09:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 08:09:45 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 08:09:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 89 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=-1 lpr=89 pi=[50,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=50/51 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:45 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 89 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=50/51 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] r=0 lpr=89 pi=[50,89)/1 crt=76'486 lcod 76'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:45 compute-0 lvm[99164]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:09:45 compute-0 lvm[99164]: VG ceph_vg0 finished
Jan 31 08:09:45 compute-0 lvm[99167]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:09:45 compute-0 lvm[99167]: VG ceph_vg1 finished
Jan 31 08:09:45 compute-0 lvm[99169]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:09:45 compute-0 lvm[99169]: VG ceph_vg2 finished
Jan 31 08:09:46 compute-0 loving_meninsky[99088]: {}
Jan 31 08:09:46 compute-0 systemd[1]: libpod-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope: Deactivated successfully.
Jan 31 08:09:46 compute-0 systemd[1]: libpod-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope: Consumed 1.005s CPU time.
Jan 31 08:09:46 compute-0 podman[99072]: 2026-01-31 08:09:46.105527401 +0000 UTC m=+1.150341771 container died 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:09:46 compute-0 ceph-mon[75227]: 2.11 scrub starts
Jan 31 08:09:46 compute-0 ceph-mon[75227]: 2.11 scrub ok
Jan 31 08:09:46 compute-0 ceph-mon[75227]: 8.13 scrub starts
Jan 31 08:09:46 compute-0 ceph-mon[75227]: 8.13 scrub ok
Jan 31 08:09:46 compute-0 ceph-mon[75227]: 2.e scrub starts
Jan 31 08:09:46 compute-0 ceph-mon[75227]: 2.e scrub ok
Jan 31 08:09:46 compute-0 ceph-mon[75227]: osdmap e89: 3 total, 3 up, 3 in
Jan 31 08:09:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-32ee0e43712fa228bfaa6bc0715890306c7d172b549f63dfc1a345dc19854b06-merged.mount: Deactivated successfully.
Jan 31 08:09:46 compute-0 podman[99072]: 2026-01-31 08:09:46.373798708 +0000 UTC m=+1.418613038 container remove 06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:09:46 compute-0 systemd[1]: libpod-conmon-06cedd1236cfdb7a4fa113a57346206cccb4994d25724140958ab16456c6d927.scope: Deactivated successfully.
Jan 31 08:09:46 compute-0 sudo[98994]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:09:46 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 31 08:09:46 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 31 08:09:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:09:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:09:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:09:46 compute-0 sudo[99188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:09:46 compute-0 sudo[99188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:09:46 compute-0 sudo[99188]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:46 compute-0 sshd-session[99186]: Accepted publickey for zuul from 192.168.122.30 port 41164 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:09:46 compute-0 systemd-logind[793]: New session 34 of user zuul.
Jan 31 08:09:46 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 31 08:09:46 compute-0 sshd-session[99186]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:09:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 08:09:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 08:09:46 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 08:09:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 90 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=89/90 n=7 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[50,89)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 90 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=89/90 n=6 ec=50/35 lis/c=50/50 les/c/f=51/51/0 sis=89) [2]/[1] async=[2] r=0 lpr=89 pi=[50,89)/1 crt=78'487 lcod 76'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:47 compute-0 ceph-mon[75227]: 7.7 scrub starts
Jan 31 08:09:47 compute-0 ceph-mon[75227]: 7.7 scrub ok
Jan 31 08:09:47 compute-0 ceph-mon[75227]: pgmap v176: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:09:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:09:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:09:47 compute-0 ceph-mon[75227]: osdmap e90: 3 total, 3 up, 3 in
Jan 31 08:09:47 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 31 08:09:47 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 31 08:09:47 compute-0 python3.9[99364]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:09:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:09:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 08:09:47 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 08:09:47 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 08:09:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 08:09:48 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 08:09:48 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=89/90 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.711726189s) [2] async=[2] r=-1 lpr=91 pi=[50,91)/1 crt=41'483 lcod 0'0 active pruub 161.700485229s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:48 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=89/90 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.711628914s) [2] r=-1 lpr=91 pi=[50,91)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 161.700485229s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:48 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=89/90 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.717023849s) [2] async=[2] r=-1 lpr=91 pi=[50,91)/1 crt=78'487 lcod 76'486 active pruub 161.707305908s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:48 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=89/90 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91 pruub=14.716917038s) [2] r=-1 lpr=91 pi=[50,91)/1 crt=78'487 lcod 76'486 unknown NOTIFY pruub 161.707305908s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:48 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:48 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:48 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:48 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 91 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:48 compute-0 ceph-mon[75227]: 10.16 scrub starts
Jan 31 08:09:48 compute-0 ceph-mon[75227]: 10.16 scrub ok
Jan 31 08:09:48 compute-0 ceph-mon[75227]: 10.1e scrub starts
Jan 31 08:09:48 compute-0 ceph-mon[75227]: 10.1e scrub ok
Jan 31 08:09:48 compute-0 ceph-mon[75227]: pgmap v179: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 08:09:48 compute-0 ceph-mon[75227]: osdmap e91: 3 total, 3 up, 3 in
Jan 31 08:09:48 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 31 08:09:48 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 31 08:09:49 compute-0 sudo[99581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdvcnpymaqocjmsqhocpnrmcfvwsltov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769846988.5709028-27-244287878733674/AnsiballZ_command.py'
Jan 31 08:09:49 compute-0 sudo[99581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:09:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 08:09:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 08:09:49 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 08:09:49 compute-0 python3.9[99583]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:09:49 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 92 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=91/92 n=7 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:49 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 92 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=89/50 les/c/f=90/51/0 sis=91) [2] r=0 lpr=91 pi=[50,91)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:49 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 08:09:49 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 08:09:49 compute-0 ceph-mon[75227]: 10.5 scrub starts
Jan 31 08:09:49 compute-0 ceph-mon[75227]: 10.5 scrub ok
Jan 31 08:09:49 compute-0 ceph-mon[75227]: 5.a scrub starts
Jan 31 08:09:49 compute-0 ceph-mon[75227]: 5.a scrub ok
Jan 31 08:09:49 compute-0 ceph-mon[75227]: osdmap e92: 3 total, 3 up, 3 in
Jan 31 08:09:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:50 compute-0 ceph-mon[75227]: 5.14 scrub starts
Jan 31 08:09:50 compute-0 ceph-mon[75227]: 5.14 scrub ok
Jan 31 08:09:50 compute-0 ceph-mon[75227]: pgmap v182: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:09:50 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 31 08:09:50 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 31 08:09:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 31 08:09:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 31 08:09:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 683 B/s wr, 17 op/s; 69 B/s, 2 objects/s recovering
Jan 31 08:09:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 08:09:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:09:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 31 08:09:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:09:51 compute-0 ceph-mon[75227]: 2.c scrub starts
Jan 31 08:09:51 compute-0 ceph-mon[75227]: 2.c scrub ok
Jan 31 08:09:51 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 31 08:09:51 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 31 08:09:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 08:09:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:09:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:09:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 08:09:52 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 08:09:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 93 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93 pruub=12.769381523s) [1] r=-1 lpr=93 pi=[71,93)/1 crt=39'39 active pruub 168.799011230s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:52 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 93 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93 pruub=12.769127846s) [1] r=-1 lpr=93 pi=[71,93)/1 crt=39'39 unknown NOTIFY pruub 168.799011230s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:52 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 93 pg[6.d( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93) [1] r=0 lpr=93 pi=[71,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:52 compute-0 ceph-mon[75227]: 8.8 scrub starts
Jan 31 08:09:52 compute-0 ceph-mon[75227]: 8.8 scrub ok
Jan 31 08:09:52 compute-0 ceph-mon[75227]: pgmap v183: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 683 B/s wr, 17 op/s; 69 B/s, 2 objects/s recovering
Jan 31 08:09:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:09:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 31 08:09:52 compute-0 ceph-mon[75227]: 5.b scrub starts
Jan 31 08:09:52 compute-0 ceph-mon[75227]: 5.b scrub ok
Jan 31 08:09:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:09:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 08:09:52 compute-0 ceph-mon[75227]: osdmap e93: 3 total, 3 up, 3 in
Jan 31 08:09:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 08:09:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 08:09:53 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 08:09:53 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 94 pg[6.d( v 39'39 lc 37'10 (0'0,39'39] local-lis/les=93/94 n=1 ec=48/23 lis/c=71/71 les/c/f=72/72/0 sis=93) [1] r=0 lpr=93 pi=[71,93)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:53 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 31 08:09:53 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 31 08:09:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 723 B/s wr, 18 op/s; 73 B/s, 2 objects/s recovering
Jan 31 08:09:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 08:09:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:09:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 31 08:09:53 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:09:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 08:09:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:09:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:09:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 08:09:54 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 08:09:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:54 compute-0 ceph-mon[75227]: osdmap e94: 3 total, 3 up, 3 in
Jan 31 08:09:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:09:54 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 31 08:09:55 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 31 08:09:55 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 31 08:09:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 31 08:09:55 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 31 08:09:55 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 31 08:09:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 08:09:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:09:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 31 08:09:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:09:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 08:09:55 compute-0 ceph-mon[75227]: 5.15 scrub starts
Jan 31 08:09:55 compute-0 ceph-mon[75227]: 5.15 scrub ok
Jan 31 08:09:55 compute-0 ceph-mon[75227]: pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 723 B/s wr, 18 op/s; 73 B/s, 2 objects/s recovering
Jan 31 08:09:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:09:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 08:09:55 compute-0 ceph-mon[75227]: osdmap e95: 3 total, 3 up, 3 in
Jan 31 08:09:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:09:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:09:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 08:09:56 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 08:09:56 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 96 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96 pruub=10.486717224s) [2] r=-1 lpr=96 pi=[67,96)/1 crt=39'39 active pruub 170.757995605s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:09:56 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 96 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=67/68 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96 pruub=10.486496925s) [2] r=-1 lpr=96 pi=[67,96)/1 crt=39'39 unknown NOTIFY pruub 170.757995605s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:09:56 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 96 pg[6.f( empty local-lis/les=0/0 n=0 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96) [2] r=0 lpr=96 pi=[67,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:09:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 31 08:09:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 31 08:09:56 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 31 08:09:56 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 31 08:09:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 08:09:57 compute-0 ceph-mon[75227]: 2.8 scrub starts
Jan 31 08:09:57 compute-0 ceph-mon[75227]: 2.8 scrub ok
Jan 31 08:09:57 compute-0 ceph-mon[75227]: pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 682 B/s wr, 17 op/s; 81 B/s, 2 objects/s recovering
Jan 31 08:09:57 compute-0 ceph-mon[75227]: 10.3 scrub starts
Jan 31 08:09:57 compute-0 ceph-mon[75227]: 10.3 scrub ok
Jan 31 08:09:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:09:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 31 08:09:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:09:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 08:09:57 compute-0 ceph-mon[75227]: osdmap e96: 3 total, 3 up, 3 in
Jan 31 08:09:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 08:09:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 31 08:09:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 08:09:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 08:09:57 compute-0 sudo[99581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:09:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 31 08:09:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 31 08:09:57 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 08:09:58 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 97 pg[6.f( v 39'39 lc 37'1 (0'0,39'39] local-lis/les=96/97 n=1 ec=48/23 lis/c=67/67 les/c/f=68/68/0 sis=96) [2] r=0 lpr=96 pi=[67,96)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:09:58 compute-0 sshd-session[99214]: Connection closed by 192.168.122.30 port 41164
Jan 31 08:09:58 compute-0 sshd-session[99186]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:09:58 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 08:09:58 compute-0 systemd[1]: session-34.scope: Consumed 7.801s CPU time.
Jan 31 08:09:58 compute-0 systemd-logind[793]: Session 34 logged out. Waiting for processes to exit.
Jan 31 08:09:58 compute-0 systemd-logind[793]: Removed session 34.
Jan 31 08:09:58 compute-0 ceph-mon[75227]: 8.a scrub starts
Jan 31 08:09:58 compute-0 ceph-mon[75227]: 8.a scrub ok
Jan 31 08:09:58 compute-0 ceph-mon[75227]: 2.0 scrub starts
Jan 31 08:09:58 compute-0 ceph-mon[75227]: 2.0 scrub ok
Jan 31 08:09:58 compute-0 ceph-mon[75227]: pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 08:09:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 31 08:09:58 compute-0 ceph-mon[75227]: osdmap e97: 3 total, 3 up, 3 in
Jan 31 08:09:58 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 31 08:09:58 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 31 08:09:58 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 31 08:09:58 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 31 08:09:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 08:09:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 08:09:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 08:09:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 08:09:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:09:59 compute-0 ceph-mon[75227]: 5.0 scrub starts
Jan 31 08:09:59 compute-0 ceph-mon[75227]: 5.0 scrub ok
Jan 31 08:09:59 compute-0 ceph-mon[75227]: 2.16 scrub starts
Jan 31 08:09:59 compute-0 ceph-mon[75227]: 2.16 scrub ok
Jan 31 08:09:59 compute-0 ceph-mon[75227]: 11.0 scrub starts
Jan 31 08:09:59 compute-0 ceph-mon[75227]: 11.0 scrub ok
Jan 31 08:09:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 08:09:59 compute-0 ceph-mon[75227]: osdmap e98: 3 total, 3 up, 3 in
Jan 31 08:09:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 08:09:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 31 08:09:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 08:10:00 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 31 08:10:00 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 31 08:10:00 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 31 08:10:00 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 31 08:10:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 08:10:00 compute-0 ceph-mon[75227]: pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 08:10:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 31 08:10:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 08:10:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 08:10:01 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 08:10:01 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 31 08:10:01 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 31 08:10:01 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 08:10:01 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 08:10:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 objects/s recovering
Jan 31 08:10:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 31 08:10:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 08:10:02 compute-0 ceph-mon[75227]: 5.2 scrub starts
Jan 31 08:10:02 compute-0 ceph-mon[75227]: 5.2 scrub ok
Jan 31 08:10:02 compute-0 ceph-mon[75227]: 8.3 scrub starts
Jan 31 08:10:02 compute-0 ceph-mon[75227]: 8.3 scrub ok
Jan 31 08:10:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 08:10:02 compute-0 ceph-mon[75227]: osdmap e99: 3 total, 3 up, 3 in
Jan 31 08:10:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 31 08:10:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 08:10:02 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 08:10:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 08:10:02 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 08:10:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:02 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 31 08:10:02 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 31 08:10:03 compute-0 ceph-mon[75227]: 10.1 scrub starts
Jan 31 08:10:03 compute-0 ceph-mon[75227]: 10.1 scrub ok
Jan 31 08:10:03 compute-0 ceph-mon[75227]: 8.1 scrub starts
Jan 31 08:10:03 compute-0 ceph-mon[75227]: 8.1 scrub ok
Jan 31 08:10:03 compute-0 ceph-mon[75227]: pgmap v195: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 110 B/s, 0 objects/s recovering
Jan 31 08:10:03 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 08:10:03 compute-0 ceph-mon[75227]: osdmap e100: 3 total, 3 up, 3 in
Jan 31 08:10:03 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 31 08:10:03 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 31 08:10:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Jan 31 08:10:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 31 08:10:03 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 08:10:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 08:10:04 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 08:10:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 08:10:04 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 08:10:04 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 101 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=101 pruub=13.944024086s) [2] r=-1 lpr=101 pi=[64,101)/1 crt=72'484 lcod 72'484 active pruub 182.487518311s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:04 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 101 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=101 pruub=13.943953514s) [2] r=-1 lpr=101 pi=[64,101)/1 crt=72'484 lcod 72'484 unknown NOTIFY pruub 182.487518311s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:04 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 101 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=101) [2] r=0 lpr=101 pi=[64,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:05 compute-0 ceph-mon[75227]: 10.0 scrub starts
Jan 31 08:10:05 compute-0 ceph-mon[75227]: 10.0 scrub ok
Jan 31 08:10:05 compute-0 ceph-mon[75227]: 8.0 scrub starts
Jan 31 08:10:05 compute-0 ceph-mon[75227]: 8.0 scrub ok
Jan 31 08:10:05 compute-0 ceph-mon[75227]: pgmap v197: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 0 objects/s recovering
Jan 31 08:10:05 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 31 08:10:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 08:10:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 31 08:10:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 31 08:10:05 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 08:10:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 31 08:10:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 31 08:10:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 08:10:05 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 08:10:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 102 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=0 lpr=102 pi=[64,102)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:06 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 102 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=64/65 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=0 lpr=102 pi=[64,102)/1 crt=72'484 lcod 72'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:06 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 102 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[64,102)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:06 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 102 pg[9.13( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[64,102)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 08:10:06 compute-0 ceph-mon[75227]: osdmap e101: 3 total, 3 up, 3 in
Jan 31 08:10:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 31 08:10:06 compute-0 ceph-mon[75227]: osdmap e102: 3 total, 3 up, 3 in
Jan 31 08:10:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 08:10:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 08:10:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 08:10:06 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 08:10:07 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 103 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=102/103 n=6 ec=50/35 lis/c=64/64 les/c/f=65/65/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[64,102)/1 crt=75'485 lcod 72'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 08:10:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 08:10:07 compute-0 ceph-mon[75227]: pgmap v199: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 31 08:10:07 compute-0 ceph-mon[75227]: 2.1 scrub starts
Jan 31 08:10:07 compute-0 ceph-mon[75227]: 2.1 scrub ok
Jan 31 08:10:07 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 08:10:07 compute-0 ceph-mon[75227]: osdmap e103: 3 total, 3 up, 3 in
Jan 31 08:10:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 31 08:10:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 08:10:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 08:10:08 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 08:10:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 08:10:08 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 08:10:08 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=102/103 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104 pruub=14.605498314s) [2] async=[2] r=-1 lpr=104 pi=[64,104)/1 crt=75'485 lcod 72'484 active pruub 186.982223511s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:08 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=102/103 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104 pruub=14.605233192s) [2] r=-1 lpr=104 pi=[64,104)/1 crt=75'485 lcod 72'484 unknown NOTIFY pruub 186.982223511s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:08 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=104 pruub=12.902797699s) [1] r=-1 lpr=104 pi=[59,104)/1 crt=41'483 active pruub 185.281417847s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:08 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 104 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=104 pruub=12.902758598s) [1] r=-1 lpr=104 pi=[59,104)/1 crt=41'483 unknown NOTIFY pruub 185.281417847s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:08 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 31 08:10:08 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104) [2] r=0 lpr=104 pi=[64,104)/1 pct=0'0 crt=75'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:08 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 104 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104) [2] r=0 lpr=104 pi=[64,104)/1 crt=75'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:08 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=104) [1] r=0 lpr=104 pi=[59,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:08 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 31 08:10:09 compute-0 ceph-mon[75227]: 3.b scrub starts
Jan 31 08:10:09 compute-0 ceph-mon[75227]: 3.b scrub ok
Jan 31 08:10:09 compute-0 ceph-mon[75227]: pgmap v202: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:09 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 31 08:10:09 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 08:10:09 compute-0 ceph-mon[75227]: osdmap e104: 3 total, 3 up, 3 in
Jan 31 08:10:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 08:10:09 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 08:10:09 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 08:10:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 31 08:10:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 08:10:09 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 31 08:10:09 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 31 08:10:10 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 31 08:10:10 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 31 08:10:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 08:10:10 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 08:10:11 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[59,105)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:11 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[59,105)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 31 08:10:11 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 31 08:10:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 08:10:11 compute-0 ceph-mon[75227]: 2.b scrub starts
Jan 31 08:10:11 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 105 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=0 lpr=105 pi=[59,105)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:11 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 105 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=59/60 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] r=0 lpr=105 pi=[59,105)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:11 compute-0 ceph-mon[75227]: 2.b scrub ok
Jan 31 08:10:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 31 08:10:11 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 31 08:10:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 31 08:10:11 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 105 pg[9.13( v 75'485 (0'0,75'485] local-lis/les=104/105 n=6 ec=50/35 lis/c=102/64 les/c/f=103/65/0 sis=104) [2] r=0 lpr=104 pi=[64,104)/1 crt=75'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Jan 31 08:10:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 08:10:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 08:10:11 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 08:10:12 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 31 08:10:12 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 31 08:10:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 11.c scrub starts
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 11.c scrub ok
Jan 31 08:10:12 compute-0 ceph-mon[75227]: pgmap v204: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 10.a scrub starts
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 10.a scrub ok
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 10.17 scrub starts
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 10.17 scrub ok
Jan 31 08:10:12 compute-0 ceph-mon[75227]: osdmap e105: 3 total, 3 up, 3 in
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 5.5 scrub starts
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 7.0 scrub starts
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 7.0 scrub ok
Jan 31 08:10:12 compute-0 ceph-mon[75227]: 5.5 scrub ok
Jan 31 08:10:12 compute-0 ceph-mon[75227]: pgmap v206: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 0 objects/s recovering
Jan 31 08:10:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 08:10:12 compute-0 ceph-mon[75227]: osdmap e106: 3 total, 3 up, 3 in
Jan 31 08:10:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 08:10:13 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 08:10:13 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 106 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=13.781268120s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=41'483 active pruub 179.828384399s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:13 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 106 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=106 pruub=13.781217575s) [0] r=-1 lpr=106 pi=[79,106)/1 crt=41'483 unknown NOTIFY pruub 179.828384399s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:13 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 106 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=106) [0] r=0 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 31 08:10:13 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 107 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=105/107 n=6 ec=50/35 lis/c=59/59 les/c/f=60/60/0 sis=105) [1]/[0] async=[1] r=0 lpr=105 pi=[59,105)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:14 compute-0 ceph-mon[75227]: 2.f scrub starts
Jan 31 08:10:14 compute-0 ceph-mon[75227]: 2.f scrub ok
Jan 31 08:10:14 compute-0 ceph-mon[75227]: osdmap e107: 3 total, 3 up, 3 in
Jan 31 08:10:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 08:10:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 08:10:14 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 08:10:14 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[79,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:14 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.16( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=-1 lpr=108 pi=[79,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:14 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=105/107 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108 pruub=15.017568588s) [1] async=[1] r=-1 lpr=108 pi=[59,108)/1 crt=41'483 active pruub 193.725326538s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:14 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=105/107 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108 pruub=15.017430305s) [1] r=-1 lpr=108 pi=[59,108)/1 crt=41'483 unknown NOTIFY pruub 193.725326538s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:14 compute-0 sshd-session[99640]: Accepted publickey for zuul from 192.168.122.30 port 52690 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:10:14 compute-0 systemd-logind[793]: New session 35 of user zuul.
Jan 31 08:10:14 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 31 08:10:14 compute-0 sshd-session[99640]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:10:14 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 108 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=0 lpr=108 pi=[79,108)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:14 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 108 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] r=0 lpr=108 pi=[79,108)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:14 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:14 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 108 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:15 compute-0 ceph-mon[75227]: pgmap v209: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 31 08:10:15 compute-0 ceph-mon[75227]: osdmap e108: 3 total, 3 up, 3 in
Jan 31 08:10:15 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 31 08:10:15 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 31 08:10:15 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 31 08:10:15 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 31 08:10:15 compute-0 python3.9[99793]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 08:10:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 08:10:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 08:10:16 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 08:10:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 31 08:10:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 08:10:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 08:10:16 compute-0 python3.9[99967]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:10:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 31 08:10:17 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 31 08:10:17 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 31 08:10:17 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 109 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=108) [0]/[2] async=[0] r=0 lpr=108 pi=[79,108)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:17 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 109 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=105/59 les/c/f=107/60/0 sis=108) [1] r=0 lpr=108 pi=[59,108)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:17 compute-0 sudo[100121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqwugkqhdkegtbrycumgxjypzasuepl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847017.205471-40-71834713958358/AnsiballZ_command.py'
Jan 31 08:10:17 compute-0 sudo[100121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:10:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:17 compute-0 ceph-mon[75227]: pgmap v211: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:17 compute-0 ceph-mon[75227]: osdmap e109: 3 total, 3 up, 3 in
Jan 31 08:10:17 compute-0 python3.9[100123]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:10:17 compute-0 sudo[100121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:18 compute-0 sudo[100274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aipsyayveflyysxvqzevltgbglgnommq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847018.101952-52-227092370610921/AnsiballZ_stat.py'
Jan 31 08:10:18 compute-0 sudo[100274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:10:18 compute-0 python3.9[100276]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:10:18 compute-0 sudo[100274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:18 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 31 08:10:18 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 31 08:10:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 2.13 scrub starts
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 2.13 scrub ok
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 3.4 scrub starts
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 3.4 scrub ok
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 3.0 scrub starts
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 5.6 scrub starts
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 5.6 scrub ok
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 3.0 scrub ok
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 5.3 scrub starts
Jan 31 08:10:18 compute-0 ceph-mon[75227]: 5.3 scrub ok
Jan 31 08:10:18 compute-0 ceph-mon[75227]: pgmap v213: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:19 compute-0 sudo[100428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxikgdnljcqfnynodwioyqtobdijoeur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847018.9513535-63-68634384817604/AnsiballZ_file.py'
Jan 31 08:10:19 compute-0 sudo[100428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:10:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 08:10:19 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 08:10:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 31 08:10:19 compute-0 python3.9[100430]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:10:19 compute-0 sudo[100428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:19 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110 pruub=13.738016129s) [0] async=[0] r=-1 lpr=110 pi=[79,110)/1 crt=41'483 active pruub 185.798767090s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:19 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=108/109 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110 pruub=13.737934113s) [0] r=-1 lpr=110 pi=[79,110)/1 crt=41'483 unknown NOTIFY pruub 185.798767090s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:19 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:19 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 110 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 31 08:10:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:19 compute-0 sudo[100580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhkmqyjyapggapbrtlffuzcauctwddix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847019.750642-72-230927535550954/AnsiballZ_file.py'
Jan 31 08:10:19 compute-0 sudo[100580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:10:20 compute-0 python3.9[100582]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:10:20 compute-0 sudo[100580]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:20 compute-0 ceph-mon[75227]: 10.c scrub starts
Jan 31 08:10:20 compute-0 ceph-mon[75227]: 10.c scrub ok
Jan 31 08:10:20 compute-0 ceph-mon[75227]: osdmap e110: 3 total, 3 up, 3 in
Jan 31 08:10:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 08:10:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 08:10:20 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 31 08:10:20 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 31 08:10:20 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 08:10:20 compute-0 python3.9[100732]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:10:20 compute-0 network[100749]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:10:20 compute-0 network[100750]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:10:20 compute-0 network[100751]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:10:21 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 111 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=110/111 n=6 ec=50/35 lis/c=108/79 les/c/f=109/80/0 sis=110) [0] r=0 lpr=110 pi=[79,110)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:21 compute-0 ceph-mon[75227]: 11.a scrub starts
Jan 31 08:10:21 compute-0 ceph-mon[75227]: 11.a scrub ok
Jan 31 08:10:21 compute-0 ceph-mon[75227]: pgmap v215: 305 pgs: 1 unknown, 1 peering, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:21 compute-0 ceph-mon[75227]: osdmap e111: 3 total, 3 up, 3 in
Jan 31 08:10:21 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 31 08:10:21 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 31 08:10:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 31 08:10:21 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 31 08:10:21 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 31 08:10:22 compute-0 ceph-mon[75227]: 5.e scrub starts
Jan 31 08:10:22 compute-0 ceph-mon[75227]: 5.e scrub ok
Jan 31 08:10:22 compute-0 ceph-mon[75227]: 10.4 scrub starts
Jan 31 08:10:22 compute-0 ceph-mon[75227]: 10.4 scrub ok
Jan 31 08:10:22 compute-0 ceph-mon[75227]: pgmap v217: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 31 08:10:22 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 31 08:10:22 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 31 08:10:23 compute-0 ceph-mon[75227]: 5.d scrub starts
Jan 31 08:10:23 compute-0 ceph-mon[75227]: 5.d scrub ok
Jan 31 08:10:23 compute-0 ceph-mon[75227]: 5.1c scrub starts
Jan 31 08:10:23 compute-0 ceph-mon[75227]: 5.1c scrub ok
Jan 31 08:10:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 31 08:10:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 31 08:10:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 31 08:10:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:24 compute-0 python3.9[101011]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:10:24 compute-0 ceph-mon[75227]: pgmap v218: 305 pgs: 1 unknown, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 31 08:10:25 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 31 08:10:25 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 31 08:10:25 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 31 08:10:25 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 31 08:10:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 31 08:10:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 31 08:10:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 08:10:25 compute-0 python3.9[101161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:10:26 compute-0 ceph-mon[75227]: 2.1d scrub starts
Jan 31 08:10:26 compute-0 ceph-mon[75227]: 2.1d scrub ok
Jan 31 08:10:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 31 08:10:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 08:10:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 31 08:10:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 31 08:10:26 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 08:10:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 08:10:26 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 08:10:26 compute-0 python3.9[101315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:10:27 compute-0 ceph-mon[75227]: 5.7 scrub starts
Jan 31 08:10:27 compute-0 ceph-mon[75227]: 5.7 scrub ok
Jan 31 08:10:27 compute-0 ceph-mon[75227]: 3.2 scrub starts
Jan 31 08:10:27 compute-0 ceph-mon[75227]: 3.2 scrub ok
Jan 31 08:10:27 compute-0 ceph-mon[75227]: pgmap v219: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 31 08:10:27 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 08:10:27 compute-0 ceph-mon[75227]: osdmap e112: 3 total, 3 up, 3 in
Jan 31 08:10:27 compute-0 sudo[101471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovcmwfiaofersuazwdajyqfpfbcgqvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847027.2973015-120-36812338399206/AnsiballZ_setup.py'
Jan 31 08:10:27 compute-0 sudo[101471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:10:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 31 08:10:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 31 08:10:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 08:10:27 compute-0 python3.9[101473]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:10:28 compute-0 sudo[101471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 08:10:28 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 31 08:10:28 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 31 08:10:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 08:10:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 08:10:28 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 08:10:28 compute-0 sudo[101555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlliviqqahzewkvzwqyvmmmhdgtmuhkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847027.2973015-120-36812338399206/AnsiballZ_dnf.py'
Jan 31 08:10:28 compute-0 sudo[101555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:10:28 compute-0 ceph-mon[75227]: 7.d scrub starts
Jan 31 08:10:28 compute-0 ceph-mon[75227]: 7.d scrub ok
Jan 31 08:10:28 compute-0 ceph-mon[75227]: pgmap v221: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 31 08:10:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 31 08:10:28 compute-0 python3.9[101557]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:10:29 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 31 08:10:29 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 31 08:10:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 27 B/s, 0 objects/s recovering
Jan 31 08:10:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 31 08:10:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 08:10:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:29 compute-0 ceph-mon[75227]: 8.7 scrub starts
Jan 31 08:10:29 compute-0 ceph-mon[75227]: 8.7 scrub ok
Jan 31 08:10:29 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 08:10:29 compute-0 ceph-mon[75227]: osdmap e113: 3 total, 3 up, 3 in
Jan 31 08:10:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 08:10:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 08:10:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 08:10:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 08:10:30 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 31 08:10:30 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 31 08:10:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 114 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=114 pruub=15.534884453s) [2] r=-1 lpr=114 pi=[60,114)/1 crt=75'486 lcod 75'486 active pruub 210.364212036s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:30 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 114 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=114 pruub=15.534838676s) [2] r=-1 lpr=114 pi=[60,114)/1 crt=75'486 lcod 75'486 unknown NOTIFY pruub 210.364212036s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:30 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=114) [2] r=0 lpr=114 pi=[60,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:30 compute-0 ceph-mon[75227]: 11.5 scrub starts
Jan 31 08:10:30 compute-0 ceph-mon[75227]: 11.5 scrub ok
Jan 31 08:10:30 compute-0 ceph-mon[75227]: pgmap v223: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 27 B/s, 0 objects/s recovering
Jan 31 08:10:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 31 08:10:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 08:10:30 compute-0 ceph-mon[75227]: osdmap e114: 3 total, 3 up, 3 in
Jan 31 08:10:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 08:10:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 08:10:31 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 08:10:31 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[60,115)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:31 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=-1 lpr=115 pi=[60,115)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:31 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 115 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=0 lpr=115 pi=[60,115)/1 crt=75'486 lcod 75'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:31 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 115 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=60/61 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] r=0 lpr=115 pi=[60,115)/1 crt=75'486 lcod 75'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:31 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 31 08:10:31 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 31 08:10:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:10:31
Jan 31 08:10:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:10:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:10:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.log', 'backups', 'volumes', '.mgr', 'default.rgw.control']
Jan 31 08:10:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:10:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 31 08:10:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 08:10:31 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 31 08:10:31 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 31 08:10:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 08:10:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 08:10:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 08:10:32 compute-0 ceph-mon[75227]: 3.d scrub starts
Jan 31 08:10:32 compute-0 ceph-mon[75227]: 3.d scrub ok
Jan 31 08:10:32 compute-0 ceph-mon[75227]: osdmap e115: 3 total, 3 up, 3 in
Jan 31 08:10:32 compute-0 ceph-mon[75227]: pgmap v226: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 31 08:10:32 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:10:32 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:10:32 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:10:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:10:32 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 116 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=115/116 n=6 ec=50/35 lis/c=60/60 les/c/f=61/61/0 sis=115) [2]/[0] async=[2] r=0 lpr=115 pi=[60,115)/1 crt=78'487 lcod 75'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:33 compute-0 ceph-mon[75227]: 8.5 scrub starts
Jan 31 08:10:33 compute-0 ceph-mon[75227]: 8.5 scrub ok
Jan 31 08:10:33 compute-0 ceph-mon[75227]: 5.1b scrub starts
Jan 31 08:10:33 compute-0 ceph-mon[75227]: 5.1b scrub ok
Jan 31 08:10:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 08:10:33 compute-0 ceph-mon[75227]: osdmap e116: 3 total, 3 up, 3 in
Jan 31 08:10:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 31 08:10:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 08:10:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 08:10:34 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 31 08:10:34 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 31 08:10:34 compute-0 ceph-mon[75227]: 4.18 scrub starts
Jan 31 08:10:34 compute-0 ceph-mon[75227]: 4.18 scrub ok
Jan 31 08:10:34 compute-0 ceph-mon[75227]: pgmap v228: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 31 08:10:34 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 31 08:10:34 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 31 08:10:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 08:10:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 08:10:34 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 08:10:34 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=115/116 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117 pruub=14.018237114s) [2] async=[2] r=-1 lpr=117 pi=[60,117)/1 crt=78'487 lcod 75'486 active pruub 212.917480469s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:34 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=115/116 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117 pruub=14.018141747s) [2] r=-1 lpr=117 pi=[60,117)/1 crt=78'487 lcod 75'486 unknown NOTIFY pruub 212.917480469s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:34 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117) [2] r=0 lpr=117 pi=[60,117)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:34 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 117 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117) [2] r=0 lpr=117 pi=[60,117)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 72 B/s, 1 objects/s recovering
Jan 31 08:10:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 31 08:10:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 08:10:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 08:10:35 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 08:10:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 08:10:35 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 08:10:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 118 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=118 pruub=9.449163437s) [0] r=-1 lpr=118 pi=[91,118)/1 crt=78'487 active pruub 197.783279419s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 118 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=118 pruub=9.449121475s) [0] r=-1 lpr=118 pi=[91,118)/1 crt=78'487 unknown NOTIFY pruub 197.783279419s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:35 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 118 pg[9.19( v 78'487 (0'0,78'487] local-lis/les=117/118 n=6 ec=50/35 lis/c=115/60 les/c/f=116/61/0 sis=117) [2] r=0 lpr=117 pi=[60,117)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:35 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 118 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=118) [0] r=0 lpr=118 pi=[91,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:35 compute-0 ceph-mon[75227]: 11.7 scrub starts
Jan 31 08:10:35 compute-0 ceph-mon[75227]: 11.7 scrub ok
Jan 31 08:10:35 compute-0 ceph-mon[75227]: 7.1a scrub starts
Jan 31 08:10:35 compute-0 ceph-mon[75227]: 7.1a scrub ok
Jan 31 08:10:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 08:10:35 compute-0 ceph-mon[75227]: osdmap e117: 3 total, 3 up, 3 in
Jan 31 08:10:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 31 08:10:36 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 08:10:36 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 08:10:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 08:10:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 08:10:36 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 08:10:36 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[91,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:36 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[91,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:37 compute-0 ceph-mon[75227]: pgmap v230: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 72 B/s, 1 objects/s recovering
Jan 31 08:10:37 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 119 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=0 lpr=119 pi=[91,119)/1 crt=78'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:37 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 119 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=91/92 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] r=0 lpr=119 pi=[91,119)/1 crt=78'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 08:10:37 compute-0 ceph-mon[75227]: osdmap e118: 3 total, 3 up, 3 in
Jan 31 08:10:37 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 31 08:10:37 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 31 08:10:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Jan 31 08:10:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 31 08:10:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 08:10:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 08:10:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 08:10:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 08:10:38 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 08:10:38 compute-0 ceph-mon[75227]: 7.b scrub starts
Jan 31 08:10:38 compute-0 ceph-mon[75227]: 7.b scrub ok
Jan 31 08:10:38 compute-0 ceph-mon[75227]: osdmap e119: 3 total, 3 up, 3 in
Jan 31 08:10:38 compute-0 ceph-mon[75227]: 10.7 scrub starts
Jan 31 08:10:38 compute-0 ceph-mon[75227]: 10.7 scrub ok
Jan 31 08:10:38 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 31 08:10:38 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 120 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=119/120 n=6 ec=50/35 lis/c=91/91 les/c/f=92/92/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[91,119)/1 crt=78'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 31 08:10:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 31 08:10:39 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 31 08:10:39 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 31 08:10:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 08:10:39 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 31 08:10:39 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 31 08:10:39 compute-0 ceph-mon[75227]: pgmap v233: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 75 B/s, 1 objects/s recovering
Jan 31 08:10:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 08:10:39 compute-0 ceph-mon[75227]: osdmap e120: 3 total, 3 up, 3 in
Jan 31 08:10:39 compute-0 ceph-mon[75227]: 7.14 scrub starts
Jan 31 08:10:39 compute-0 ceph-mon[75227]: 7.14 scrub ok
Jan 31 08:10:39 compute-0 ceph-mon[75227]: 2.18 scrub starts
Jan 31 08:10:39 compute-0 ceph-mon[75227]: 2.18 scrub ok
Jan 31 08:10:39 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 31 08:10:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:39 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 31 08:10:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 31 08:10:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 08:10:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 08:10:40 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 08:10:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=119/120 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121 pruub=14.153413773s) [0] async=[0] r=-1 lpr=121 pi=[91,121)/1 crt=78'487 active pruub 206.638427734s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:40 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=119/120 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121 pruub=14.153292656s) [0] r=-1 lpr=121 pi=[91,121)/1 crt=78'487 unknown NOTIFY pruub 206.638427734s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:40 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121) [0] r=0 lpr=121 pi=[91,121)/1 pct=0'0 crt=78'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:40 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 121 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=0/0 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121) [0] r=0 lpr=121 pi=[91,121)/1 crt=78'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 31 08:10:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 31 08:10:40 compute-0 ceph-mon[75227]: 3.10 scrub starts
Jan 31 08:10:40 compute-0 ceph-mon[75227]: 3.10 scrub ok
Jan 31 08:10:40 compute-0 ceph-mon[75227]: 8.15 scrub starts
Jan 31 08:10:40 compute-0 ceph-mon[75227]: pgmap v235: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:10:40 compute-0 ceph-mon[75227]: 8.15 scrub ok
Jan 31 08:10:40 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 31 08:10:40 compute-0 ceph-mon[75227]: osdmap e121: 3 total, 3 up, 3 in
Jan 31 08:10:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 08:10:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 08:10:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 08:10:41 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 08:10:41 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 122 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=122 pruub=15.742437363s) [0] r=-1 lpr=122 pi=[76,122)/1 crt=75'485 active pruub 209.373794556s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:41 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 122 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=122 pruub=15.742307663s) [0] r=-1 lpr=122 pi=[76,122)/1 crt=75'485 unknown NOTIFY pruub 209.373794556s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:41 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=122) [0] r=0 lpr=122 pi=[76,122)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:41 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 122 pg[9.1c( v 78'487 (0'0,78'487] local-lis/les=121/122 n=6 ec=50/35 lis/c=119/91 les/c/f=120/92/0 sis=121) [0] r=0 lpr=121 pi=[91,121)/1 crt=78'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 31 08:10:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 31 08:10:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:10:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 08:10:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 31 08:10:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 31 08:10:42 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:10:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 08:10:42 compute-0 ceph-mon[75227]: 7.16 scrub starts
Jan 31 08:10:42 compute-0 ceph-mon[75227]: 7.16 scrub ok
Jan 31 08:10:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 08:10:42 compute-0 ceph-mon[75227]: osdmap e122: 3 total, 3 up, 3 in
Jan 31 08:10:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 31 08:10:42 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 08:10:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[76,123)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:42 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 123 pg[9.1e( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=-1 lpr=123 pi=[76,123)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:42 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=0 lpr=123 pi=[76,123)/1 crt=75'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:42 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=76/77 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] r=0 lpr=123 pi=[76,123)/1 crt=75'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:42 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=123 pruub=8.551447868s) [1] r=-1 lpr=123 pi=[79,123)/1 crt=41'483 active pruub 203.832275391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:42 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 123 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=123 pruub=8.551359177s) [1] r=-1 lpr=123 pi=[79,123)/1 crt=41'483 unknown NOTIFY pruub 203.832275391s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:42 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 123 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=123) [1] r=0 lpr=123 pi=[79,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.67515674950501e-06 of space, bias 4.0, pg target 0.0032101880994060117 quantized to 16 (current 16)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.387758839617113e-06 of space, bias 1.0, pg target 0.0013163276518851337 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:10:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:10:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 31 08:10:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 31 08:10:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 31 08:10:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 31 08:10:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 08:10:43 compute-0 ceph-mon[75227]: pgmap v238: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 31 08:10:43 compute-0 ceph-mon[75227]: 8.19 scrub starts
Jan 31 08:10:43 compute-0 ceph-mon[75227]: 8.19 scrub ok
Jan 31 08:10:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 08:10:43 compute-0 ceph-mon[75227]: osdmap e123: 3 total, 3 up, 3 in
Jan 31 08:10:43 compute-0 ceph-mon[75227]: 10.8 scrub starts
Jan 31 08:10:43 compute-0 ceph-mon[75227]: 10.8 scrub ok
Jan 31 08:10:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 31 08:10:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 08:10:43 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 08:10:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[79,124)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:44 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 124 pg[9.1f( empty local-lis/les=0/0 n=0 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=-1 lpr=124 pi=[79,124)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:44 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 124 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=0 lpr=124 pi=[79,124)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:44 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 124 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=79/80 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] r=0 lpr=124 pi=[79,124)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:44 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 124 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=123/124 n=6 ec=50/35 lis/c=76/76 les/c/f=77/77/0 sis=123) [0]/[2] async=[0] r=0 lpr=123 pi=[76,123)/1 crt=75'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 08:10:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 08:10:45 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 08:10:45 compute-0 ceph-mon[75227]: 3.13 scrub starts
Jan 31 08:10:45 compute-0 ceph-mon[75227]: 3.13 scrub ok
Jan 31 08:10:45 compute-0 ceph-mon[75227]: pgmap v240: 305 pgs: 1 active+remapped, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 31 08:10:45 compute-0 ceph-mon[75227]: osdmap e124: 3 total, 3 up, 3 in
Jan 31 08:10:45 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 31 08:10:45 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 31 08:10:45 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 125 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=124/125 n=6 ec=50/35 lis/c=79/79 les/c/f=80/80/0 sis=124) [1]/[2] async=[1] r=0 lpr=124 pi=[79,124)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 remapped+peering, 1 active+recovering+remapped, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/247 objects misplaced (1.619%); 33 B/s, 0 objects/s recovering
Jan 31 08:10:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 08:10:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 08:10:46 compute-0 sudo[101635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:46 compute-0 sudo[101635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:46 compute-0 sudo[101635]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:46 compute-0 sudo[101660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:10:46 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 08:10:46 compute-0 sudo[101660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126) [1] r=0 lpr=126 pi=[79,126)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:46 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126) [1] r=0 lpr=126 pi=[79,126)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=124/125 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126 pruub=14.954847336s) [1] async=[1] r=-1 lpr=126 pi=[79,126)/1 crt=41'483 active pruub 214.125625610s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=124/125 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126 pruub=14.954668999s) [1] r=-1 lpr=126 pi=[79,126)/1 crt=41'483 unknown NOTIFY pruub 214.125625610s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=123/124 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126 pruub=13.700307846s) [0] async=[0] r=-1 lpr=126 pi=[76,126)/1 crt=75'485 active pruub 212.871459961s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:46 compute-0 ceph-osd[88096]: osd.2 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=123/124 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126 pruub=13.700231552s) [0] r=-1 lpr=126 pi=[76,126)/1 crt=75'485 unknown NOTIFY pruub 212.871459961s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 08:10:46 compute-0 ceph-mon[75227]: osdmap e125: 3 total, 3 up, 3 in
Jan 31 08:10:46 compute-0 ceph-mon[75227]: pgmap v243: 305 pgs: 1 remapped+peering, 1 active+recovering+remapped, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/247 objects misplaced (1.619%); 33 B/s, 0 objects/s recovering
Jan 31 08:10:46 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126) [0] r=0 lpr=126 pi=[76,126)/1 pct=0'0 crt=75'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 31 08:10:46 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 126 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=0/0 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126) [0] r=0 lpr=126 pi=[76,126)/1 crt=75'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 08:10:47 compute-0 sudo[101660]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:47 compute-0 sudo[101717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:47 compute-0 sudo[101717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:47 compute-0 sudo[101717]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:47 compute-0 sudo[101743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:10:47 compute-0 sudo[101743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 31 08:10:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 31 08:10:47 compute-0 podman[101783]: 2026-01-31 08:10:47.60354495 +0000 UTC m=+0.072707492 container create b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 08:10:47 compute-0 podman[101783]: 2026-01-31 08:10:47.550777322 +0000 UTC m=+0.019939854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 remapped+peering, 1 active+recovering+remapped, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/247 objects misplaced (1.619%); 29 B/s, 0 objects/s recovering
Jan 31 08:10:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 08:10:47 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 08:10:47 compute-0 systemd[1]: Started libpod-conmon-b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a.scope.
Jan 31 08:10:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:47 compute-0 ceph-mon[75227]: 4.1b scrub starts
Jan 31 08:10:47 compute-0 ceph-mon[75227]: 4.1b scrub ok
Jan 31 08:10:47 compute-0 ceph-mon[75227]: osdmap e126: 3 total, 3 up, 3 in
Jan 31 08:10:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:10:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:10:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:10:47 compute-0 ceph-osd[87035]: osd.1 pg_epoch: 127 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=126/127 n=6 ec=50/35 lis/c=124/79 les/c/f=125/80/0 sis=126) [1] r=0 lpr=126 pi=[79,126)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:47 compute-0 ceph-osd[85971]: osd.0 pg_epoch: 127 pg[9.1e( v 75'485 (0'0,75'485] local-lis/les=126/127 n=6 ec=50/35 lis/c=123/76 les/c/f=124/77/0 sis=126) [0] r=0 lpr=126 pi=[76,126)/1 crt=75'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 08:10:47 compute-0 podman[101783]: 2026-01-31 08:10:47.964514099 +0000 UTC m=+0.433676631 container init b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:10:47 compute-0 podman[101783]: 2026-01-31 08:10:47.972084402 +0000 UTC m=+0.441246904 container start b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:47 compute-0 naughty_matsumoto[101802]: 167 167
Jan 31 08:10:47 compute-0 systemd[1]: libpod-b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a.scope: Deactivated successfully.
Jan 31 08:10:47 compute-0 podman[101783]: 2026-01-31 08:10:47.984796251 +0000 UTC m=+0.453958783 container attach b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:10:47 compute-0 podman[101783]: 2026-01-31 08:10:47.986628453 +0000 UTC m=+0.455790955 container died b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-130a962117b127d9d8c47844d31ffa3f36dc9080894e50343da76079d0e52d1a-merged.mount: Deactivated successfully.
Jan 31 08:10:49 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 08:10:49 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 08:10:49 compute-0 podman[101783]: 2026-01-31 08:10:49.279654715 +0000 UTC m=+1.748817257 container remove b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:10:49 compute-0 systemd[1]: libpod-conmon-b9306458490dc3d916dd530f19c933453d67f13b5f6d739ad6bd771916bdce8a.scope: Deactivated successfully.
Jan 31 08:10:49 compute-0 ceph-mon[75227]: 7.17 scrub starts
Jan 31 08:10:49 compute-0 ceph-mon[75227]: 7.17 scrub ok
Jan 31 08:10:49 compute-0 ceph-mon[75227]: pgmap v245: 305 pgs: 1 remapped+peering, 1 active+recovering+remapped, 303 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4/247 objects misplaced (1.619%); 29 B/s, 0 objects/s recovering
Jan 31 08:10:49 compute-0 ceph-mon[75227]: osdmap e127: 3 total, 3 up, 3 in
Jan 31 08:10:49 compute-0 podman[101835]: 2026-01-31 08:10:49.414175349 +0000 UTC m=+0.026310453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:49 compute-0 podman[101835]: 2026-01-31 08:10:49.536670263 +0000 UTC m=+0.148805367 container create 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:10:49 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 31 08:10:49 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 31 08:10:49 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 08:10:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 1 remapped+peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Jan 31 08:10:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:49 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 08:10:50 compute-0 systemd[1]: Started libpod-conmon-6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786.scope.
Jan 31 08:10:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:50 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 08:10:50 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 08:10:50 compute-0 podman[101835]: 2026-01-31 08:10:50.257064828 +0000 UTC m=+0.869199912 container init 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:50 compute-0 podman[101835]: 2026-01-31 08:10:50.26599926 +0000 UTC m=+0.878134324 container start 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:10:50 compute-0 podman[101835]: 2026-01-31 08:10:50.347116418 +0000 UTC m=+0.959251542 container attach 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:10:50 compute-0 inspiring_swirles[101857]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:10:50 compute-0 inspiring_swirles[101857]: --> All data devices are unavailable
Jan 31 08:10:50 compute-0 systemd[1]: libpod-6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786.scope: Deactivated successfully.
Jan 31 08:10:50 compute-0 podman[101835]: 2026-01-31 08:10:50.719017355 +0000 UTC m=+1.331152439 container died 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 11.10 scrub starts
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 11.10 scrub ok
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 3.14 scrub starts
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 3.14 scrub ok
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 11.15 scrub starts
Jan 31 08:10:50 compute-0 ceph-mon[75227]: pgmap v247: 305 pgs: 1 remapped+peering, 304 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 1 objects/s recovering
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 11.15 scrub ok
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 2.19 scrub starts
Jan 31 08:10:50 compute-0 ceph-mon[75227]: 2.19 scrub ok
Jan 31 08:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc8d0cb7884ccc2fa895dc539506d4544906b49a391a94f59921cd5477bfe400-merged.mount: Deactivated successfully.
Jan 31 08:10:51 compute-0 podman[101835]: 2026-01-31 08:10:51.404012892 +0000 UTC m=+2.016147996 container remove 6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:10:51 compute-0 systemd[1]: libpod-conmon-6e66eeceb5f8a22536e256ec17ce09eb9e38572c4fe3336fd9906ed3901d3786.scope: Deactivated successfully.
Jan 31 08:10:51 compute-0 sudo[101743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:51 compute-0 sudo[101902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:51 compute-0 sudo[101902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:51 compute-0 sudo[101902]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:51 compute-0 sudo[101927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:10:51 compute-0 sudo[101927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 31 08:10:51 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 31 08:10:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Jan 31 08:10:51 compute-0 podman[101974]: 2026-01-31 08:10:51.823908483 +0000 UTC m=+0.031331934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:52 compute-0 podman[101974]: 2026-01-31 08:10:52.083428762 +0000 UTC m=+0.290852233 container create 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:10:52 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 31 08:10:52 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 31 08:10:52 compute-0 systemd[1]: Started libpod-conmon-526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d.scope.
Jan 31 08:10:52 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 08:10:52 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 08:10:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:52 compute-0 ceph-mon[75227]: 11.1d scrub starts
Jan 31 08:10:52 compute-0 ceph-mon[75227]: 11.1d scrub ok
Jan 31 08:10:52 compute-0 ceph-mon[75227]: pgmap v248: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Jan 31 08:10:52 compute-0 ceph-mon[75227]: 8.10 scrub starts
Jan 31 08:10:52 compute-0 podman[101974]: 2026-01-31 08:10:52.825005493 +0000 UTC m=+1.032428944 container init 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:10:52 compute-0 podman[101974]: 2026-01-31 08:10:52.829396977 +0000 UTC m=+1.036820408 container start 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:10:52 compute-0 cool_mcclintock[101990]: 167 167
Jan 31 08:10:52 compute-0 systemd[1]: libpod-526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d.scope: Deactivated successfully.
Jan 31 08:10:52 compute-0 podman[101974]: 2026-01-31 08:10:52.936309222 +0000 UTC m=+1.143732653 container attach 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:52 compute-0 podman[101974]: 2026-01-31 08:10:52.936758105 +0000 UTC m=+1.144181536 container died 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:10:53 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 31 08:10:53 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 31 08:10:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcaad873c5eb7e203e7d83d58f3db4f2dd04b85a100d222a8f383950ab0ab8e0-merged.mount: Deactivated successfully.
Jan 31 08:10:53 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 31 08:10:53 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 31 08:10:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Jan 31 08:10:53 compute-0 ceph-mon[75227]: 8.10 scrub ok
Jan 31 08:10:53 compute-0 ceph-mon[75227]: 8.1e scrub starts
Jan 31 08:10:53 compute-0 ceph-mon[75227]: 8.1e scrub ok
Jan 31 08:10:53 compute-0 ceph-mon[75227]: 7.1f scrub starts
Jan 31 08:10:53 compute-0 ceph-mon[75227]: 7.1f scrub ok
Jan 31 08:10:53 compute-0 podman[101974]: 2026-01-31 08:10:53.931511847 +0000 UTC m=+2.138935298 container remove 526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:10:54 compute-0 systemd[1]: libpod-conmon-526cffdac95a1996224390e010e25a1d0589bbb599d44cfe10624fe4f50a0c8d.scope: Deactivated successfully.
Jan 31 08:10:54 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 31 08:10:54 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 31 08:10:54 compute-0 podman[102013]: 2026-01-31 08:10:54.044425431 +0000 UTC m=+0.017859455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:54 compute-0 podman[102013]: 2026-01-31 08:10:54.37915509 +0000 UTC m=+0.352589064 container create fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:10:54 compute-0 systemd[1]: Started libpod-conmon-fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2.scope.
Jan 31 08:10:54 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:54 compute-0 podman[102013]: 2026-01-31 08:10:54.784435379 +0000 UTC m=+0.757869403 container init fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:10:54 compute-0 podman[102013]: 2026-01-31 08:10:54.794354379 +0000 UTC m=+0.767788343 container start fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:10:54 compute-0 podman[102013]: 2026-01-31 08:10:54.865888596 +0000 UTC m=+0.839322620 container attach fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:10:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:10:55 compute-0 elegant_shannon[102030]: {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:     "0": [
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:         {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "devices": [
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "/dev/loop3"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             ],
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_name": "ceph_lv0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_size": "21470642176",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "name": "ceph_lv0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "tags": {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.crush_device_class": "",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.encrypted": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.objectstore": "bluestore",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osd_id": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.type": "block",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.vdo": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.with_tpm": "0"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             },
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "type": "block",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "vg_name": "ceph_vg0"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:         }
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:     ],
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:     "1": [
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:         {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "devices": [
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "/dev/loop4"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             ],
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_name": "ceph_lv1",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_size": "21470642176",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "name": "ceph_lv1",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "tags": {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.crush_device_class": "",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.encrypted": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.objectstore": "bluestore",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osd_id": "1",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.type": "block",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.vdo": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.with_tpm": "0"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             },
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "type": "block",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "vg_name": "ceph_vg1"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:         }
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:     ],
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:     "2": [
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:         {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "devices": [
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "/dev/loop5"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             ],
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_name": "ceph_lv2",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_size": "21470642176",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "name": "ceph_lv2",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "tags": {
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.cluster_name": "ceph",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.crush_device_class": "",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.encrypted": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.objectstore": "bluestore",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osd_id": "2",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.type": "block",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.vdo": "0",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:                 "ceph.with_tpm": "0"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             },
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "type": "block",
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:             "vg_name": "ceph_vg2"
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:         }
Jan 31 08:10:55 compute-0 elegant_shannon[102030]:     ]
Jan 31 08:10:55 compute-0 elegant_shannon[102030]: }
Jan 31 08:10:55 compute-0 systemd[1]: libpod-fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2.scope: Deactivated successfully.
Jan 31 08:10:55 compute-0 podman[102013]: 2026-01-31 08:10:55.108145788 +0000 UTC m=+1.081579782 container died fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:55 compute-0 ceph-mon[75227]: 8.11 scrub starts
Jan 31 08:10:55 compute-0 ceph-mon[75227]: 8.11 scrub ok
Jan 31 08:10:55 compute-0 ceph-mon[75227]: pgmap v249: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Jan 31 08:10:55 compute-0 ceph-mon[75227]: 3.f scrub starts
Jan 31 08:10:55 compute-0 ceph-mon[75227]: 3.f scrub ok
Jan 31 08:10:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ce40cbcfb0023973d7a0482ba484b478273f33a1beacc47df6e8b3a318bcdce-merged.mount: Deactivated successfully.
Jan 31 08:10:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 08:10:55 compute-0 podman[102013]: 2026-01-31 08:10:55.876370581 +0000 UTC m=+1.849804585 container remove fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_shannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:10:55 compute-0 sudo[101927]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:55 compute-0 systemd[1]: libpod-conmon-fa3a1dd6e79700856c56b7618c3d0f787a907b7be9fe191b6c1dda3f5feaf4c2.scope: Deactivated successfully.
Jan 31 08:10:55 compute-0 sudo[102051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:10:55 compute-0 sudo[102051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:55 compute-0 sudo[102051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:10:56 compute-0 sudo[102076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:10:56 compute-0 sudo[102076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:10:56 compute-0 podman[102113]: 2026-01-31 08:10:56.322945441 +0000 UTC m=+0.025323140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:56 compute-0 ceph-mon[75227]: pgmap v250: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 31 08:10:56 compute-0 podman[102113]: 2026-01-31 08:10:56.523479332 +0000 UTC m=+0.225857051 container create 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:10:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 31 08:10:56 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 31 08:10:56 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 31 08:10:56 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 31 08:10:56 compute-0 systemd[1]: Started libpod-conmon-7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383.scope.
Jan 31 08:10:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:57 compute-0 podman[102113]: 2026-01-31 08:10:57.031638896 +0000 UTC m=+0.734016665 container init 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:10:57 compute-0 podman[102113]: 2026-01-31 08:10:57.036549217 +0000 UTC m=+0.738926896 container start 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:10:57 compute-0 charming_wu[102129]: 167 167
Jan 31 08:10:57 compute-0 systemd[1]: libpod-7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383.scope: Deactivated successfully.
Jan 31 08:10:57 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 31 08:10:57 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 31 08:10:57 compute-0 podman[102113]: 2026-01-31 08:10:57.285645881 +0000 UTC m=+0.988023610 container attach 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:10:57 compute-0 podman[102113]: 2026-01-31 08:10:57.286096543 +0000 UTC m=+0.988474282 container died 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:10:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-30eef2469e24f8ed92f34274b05b3f6396b0fe465cce6b5ce03bbc7879acb1b4-merged.mount: Deactivated successfully.
Jan 31 08:10:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 31 08:10:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 31 08:10:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 08:10:57 compute-0 podman[102113]: 2026-01-31 08:10:57.976554768 +0000 UTC m=+1.678932437 container remove 7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:10:58 compute-0 systemd[1]: libpod-conmon-7e70ac19547b7aeb2c02446373880f0418bb7a6d1b24282c7f115e89d5d6f383.scope: Deactivated successfully.
Jan 31 08:10:58 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 08:10:58 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 08:10:58 compute-0 podman[102154]: 2026-01-31 08:10:58.206173119 +0000 UTC m=+0.115132370 container create a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:10:58 compute-0 podman[102154]: 2026-01-31 08:10:58.112977148 +0000 UTC m=+0.021936439 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:10:58 compute-0 ceph-mon[75227]: 7.12 scrub starts
Jan 31 08:10:58 compute-0 ceph-mon[75227]: 7.12 scrub ok
Jan 31 08:10:58 compute-0 ceph-mon[75227]: 3.8 scrub starts
Jan 31 08:10:58 compute-0 ceph-mon[75227]: 3.8 scrub ok
Jan 31 08:10:58 compute-0 ceph-mon[75227]: 8.b scrub starts
Jan 31 08:10:58 compute-0 ceph-mon[75227]: 8.b scrub ok
Jan 31 08:10:58 compute-0 systemd[1]: Started libpod-conmon-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope.
Jan 31 08:10:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:10:58 compute-0 podman[102154]: 2026-01-31 08:10:58.914976216 +0000 UTC m=+0.823935547 container init a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:10:58 compute-0 podman[102154]: 2026-01-31 08:10:58.925395626 +0000 UTC m=+0.834354897 container start a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:10:59 compute-0 podman[102154]: 2026-01-31 08:10:59.423590252 +0000 UTC m=+1.332549503 container attach a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:10:59 compute-0 lvm[102255]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:10:59 compute-0 lvm[102255]: VG ceph_vg0 finished
Jan 31 08:10:59 compute-0 lvm[102258]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:10:59 compute-0 lvm[102258]: VG ceph_vg1 finished
Jan 31 08:10:59 compute-0 lvm[102259]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:10:59 compute-0 lvm[102259]: VG ceph_vg2 finished
Jan 31 08:10:59 compute-0 ceph-mon[75227]: 3.1d scrub starts
Jan 31 08:10:59 compute-0 ceph-mon[75227]: 3.1d scrub ok
Jan 31 08:10:59 compute-0 ceph-mon[75227]: pgmap v251: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Jan 31 08:10:59 compute-0 ceph-mon[75227]: 7.4 scrub starts
Jan 31 08:10:59 compute-0 ceph-mon[75227]: 7.4 scrub ok
Jan 31 08:10:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 08:10:59 compute-0 affectionate_gauss[102179]: {}
Jan 31 08:10:59 compute-0 systemd[1]: libpod-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope: Deactivated successfully.
Jan 31 08:10:59 compute-0 podman[102154]: 2026-01-31 08:10:59.860482703 +0000 UTC m=+1.769441994 container died a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:10:59 compute-0 systemd[1]: libpod-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope: Consumed 1.055s CPU time.
Jan 31 08:10:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dce154f3873286d4f6719cf43d234182dc8505a16da61b2c55fb2b738a948de-merged.mount: Deactivated successfully.
Jan 31 08:11:00 compute-0 podman[102154]: 2026-01-31 08:11:00.632071145 +0000 UTC m=+2.541030396 container remove a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:11:00 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 31 08:11:00 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 31 08:11:00 compute-0 sudo[102076]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:11:00 compute-0 systemd[1]: libpod-conmon-a98e1c5f8c8d55bbb18e8b06749e914315d0678fb1727ba0e05e7098b07ecdd5.scope: Deactivated successfully.
Jan 31 08:11:00 compute-0 ceph-mon[75227]: pgmap v252: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 08:11:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:11:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:11:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:11:01 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 31 08:11:01 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 31 08:11:01 compute-0 sudo[102283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:11:01 compute-0 sudo[102283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:11:01 compute-0 sudo[102283]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:01 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 31 08:11:01 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 31 08:11:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 08:11:02 compute-0 ceph-mon[75227]: 4.1a scrub starts
Jan 31 08:11:02 compute-0 ceph-mon[75227]: 4.1a scrub ok
Jan 31 08:11:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:11:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:11:02 compute-0 ceph-mon[75227]: 3.c scrub starts
Jan 31 08:11:02 compute-0 ceph-mon[75227]: 3.c scrub ok
Jan 31 08:11:02 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 31 08:11:02 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 31 08:11:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:03 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 08:11:03 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 08:11:03 compute-0 ceph-mon[75227]: 7.10 scrub starts
Jan 31 08:11:03 compute-0 ceph-mon[75227]: 7.10 scrub ok
Jan 31 08:11:03 compute-0 ceph-mon[75227]: pgmap v253: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Jan 31 08:11:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:04 compute-0 ceph-mon[75227]: 11.3 scrub starts
Jan 31 08:11:04 compute-0 ceph-mon[75227]: 11.3 scrub ok
Jan 31 08:11:04 compute-0 ceph-mon[75227]: 5.1e scrub starts
Jan 31 08:11:04 compute-0 ceph-mon[75227]: 5.1e scrub ok
Jan 31 08:11:04 compute-0 ceph-mon[75227]: pgmap v254: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 31 08:11:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 31 08:11:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:06 compute-0 ceph-mon[75227]: pgmap v255: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 31 08:11:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 31 08:11:07 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 08:11:07 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 08:11:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:07 compute-0 ceph-mon[75227]: 7.c scrub starts
Jan 31 08:11:07 compute-0 ceph-mon[75227]: 7.c scrub ok
Jan 31 08:11:07 compute-0 ceph-mon[75227]: 2.17 scrub starts
Jan 31 08:11:07 compute-0 ceph-mon[75227]: 2.17 scrub ok
Jan 31 08:11:08 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 31 08:11:08 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 31 08:11:08 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 31 08:11:08 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 31 08:11:08 compute-0 ceph-mon[75227]: 3.7 scrub starts
Jan 31 08:11:08 compute-0 ceph-mon[75227]: 3.7 scrub ok
Jan 31 08:11:08 compute-0 ceph-mon[75227]: pgmap v256: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:10 compute-0 ceph-mon[75227]: 5.13 scrub starts
Jan 31 08:11:10 compute-0 ceph-mon[75227]: 5.13 scrub ok
Jan 31 08:11:10 compute-0 ceph-mon[75227]: 11.8 scrub starts
Jan 31 08:11:10 compute-0 ceph-mon[75227]: 11.8 scrub ok
Jan 31 08:11:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 31 08:11:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 31 08:11:11 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 31 08:11:11 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 31 08:11:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:11 compute-0 ceph-mon[75227]: pgmap v257: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:11 compute-0 ceph-mon[75227]: 5.4 scrub starts
Jan 31 08:11:11 compute-0 ceph-mon[75227]: 5.4 scrub ok
Jan 31 08:11:12 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 31 08:11:12 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 31 08:11:12 compute-0 ceph-mon[75227]: 7.1 scrub starts
Jan 31 08:11:12 compute-0 ceph-mon[75227]: 7.1 scrub ok
Jan 31 08:11:12 compute-0 ceph-mon[75227]: pgmap v258: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:14 compute-0 ceph-mon[75227]: 8.2 scrub starts
Jan 31 08:11:14 compute-0 ceph-mon[75227]: 8.2 scrub ok
Jan 31 08:11:14 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 31 08:11:14 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 31 08:11:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:15 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 31 08:11:15 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 31 08:11:15 compute-0 ceph-mon[75227]: pgmap v259: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:16 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 31 08:11:16 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 31 08:11:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 31 08:11:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 31 08:11:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 08:11:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 08:11:16 compute-0 ceph-mon[75227]: 4.e scrub starts
Jan 31 08:11:16 compute-0 ceph-mon[75227]: 4.e scrub ok
Jan 31 08:11:16 compute-0 ceph-mon[75227]: 11.4 scrub starts
Jan 31 08:11:16 compute-0 ceph-mon[75227]: 11.4 scrub ok
Jan 31 08:11:16 compute-0 ceph-mon[75227]: pgmap v260: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:16 compute-0 ceph-mon[75227]: 11.14 scrub starts
Jan 31 08:11:16 compute-0 ceph-mon[75227]: 11.14 scrub ok
Jan 31 08:11:17 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 31 08:11:17 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 31 08:11:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:18 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 31 08:11:18 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 31 08:11:18 compute-0 ceph-mon[75227]: 2.15 scrub starts
Jan 31 08:11:18 compute-0 ceph-mon[75227]: 2.15 scrub ok
Jan 31 08:11:18 compute-0 ceph-mon[75227]: 11.d scrub starts
Jan 31 08:11:18 compute-0 ceph-mon[75227]: 11.d scrub ok
Jan 31 08:11:19 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 08:11:19 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 08:11:19 compute-0 ceph-mon[75227]: 7.2 scrub starts
Jan 31 08:11:19 compute-0 ceph-mon[75227]: 7.2 scrub ok
Jan 31 08:11:19 compute-0 ceph-mon[75227]: pgmap v261: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:19 compute-0 ceph-mon[75227]: 7.6 scrub starts
Jan 31 08:11:19 compute-0 ceph-mon[75227]: 7.6 scrub ok
Jan 31 08:11:19 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 31 08:11:19 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 31 08:11:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:20 compute-0 sudo[101555]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 31 08:11:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 31 08:11:20 compute-0 sudo[102470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmidrwkngexlnxktjelqxoofcwmdnboo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847080.4538856-132-28631441159634/AnsiballZ_command.py'
Jan 31 08:11:20 compute-0 sudo[102470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:20 compute-0 python3.9[102472]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:11:20 compute-0 ceph-mon[75227]: 7.9 scrub starts
Jan 31 08:11:20 compute-0 ceph-mon[75227]: 7.9 scrub ok
Jan 31 08:11:20 compute-0 ceph-mon[75227]: pgmap v262: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:21 compute-0 sudo[102470]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:22 compute-0 ceph-mon[75227]: 8.d scrub starts
Jan 31 08:11:22 compute-0 ceph-mon[75227]: 8.d scrub ok
Jan 31 08:11:22 compute-0 ceph-mon[75227]: 10.1a scrub starts
Jan 31 08:11:22 compute-0 ceph-mon[75227]: 10.1a scrub ok
Jan 31 08:11:22 compute-0 sudo[102757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfqihiiuwsukvepnshxjdjrisbjbarkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847081.6101978-140-159073169874267/AnsiballZ_selinux.py'
Jan 31 08:11:22 compute-0 sudo[102757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:22 compute-0 python3.9[102759]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 08:11:22 compute-0 sudo[102757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:23 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 08:11:23 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 08:11:23 compute-0 ceph-mon[75227]: pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:23 compute-0 sudo[102909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvmwrfjpqxzbobbknpegymquyjnfnaoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847082.879165-151-46666377117624/AnsiballZ_command.py'
Jan 31 08:11:23 compute-0 sudo[102909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:23 compute-0 python3.9[102911]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 08:11:23 compute-0 sudo[102909]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:23 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 08:11:23 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 08:11:23 compute-0 sudo[103061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccoaxtzecjjnqmvcupinxxrhranguhir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847083.473917-159-130542775748224/AnsiballZ_file.py'
Jan 31 08:11:23 compute-0 sudo[103061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:23 compute-0 python3.9[103063]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:23 compute-0 sudo[103061]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 31 08:11:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 31 08:11:24 compute-0 ceph-mon[75227]: 7.18 scrub starts
Jan 31 08:11:24 compute-0 ceph-mon[75227]: 7.18 scrub ok
Jan 31 08:11:24 compute-0 sudo[103213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvfiajdpltmeafqgzmvylokcltzlnon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847084.0988433-167-216836123214488/AnsiballZ_mount.py'
Jan 31 08:11:24 compute-0 sudo[103213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:24 compute-0 python3.9[103215]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 08:11:24 compute-0 sudo[103213]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:25 compute-0 ceph-mon[75227]: 4.1 scrub starts
Jan 31 08:11:25 compute-0 ceph-mon[75227]: 4.1 scrub ok
Jan 31 08:11:25 compute-0 ceph-mon[75227]: pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:25 compute-0 ceph-mon[75227]: 3.1 scrub starts
Jan 31 08:11:25 compute-0 ceph-mon[75227]: 3.1 scrub ok
Jan 31 08:11:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:26 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 31 08:11:26 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 31 08:11:26 compute-0 sudo[103365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkgxkadpjkkszirawyyzmskfkxfcynpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847085.98044-195-181842361377601/AnsiballZ_file.py'
Jan 31 08:11:26 compute-0 sudo[103365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:26 compute-0 python3.9[103367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:26 compute-0 sudo[103365]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 31 08:11:26 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 31 08:11:26 compute-0 sudo[103517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmqecnzdtnizsolycguhefpvydewttl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847086.5427957-203-259360277590836/AnsiballZ_stat.py'
Jan 31 08:11:26 compute-0 sudo[103517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:26 compute-0 python3.9[103519]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:26 compute-0 sudo[103517]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:26 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 08:11:27 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 08:11:27 compute-0 ceph-mon[75227]: pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:27 compute-0 ceph-mon[75227]: 8.9 scrub starts
Jan 31 08:11:27 compute-0 ceph-mon[75227]: 8.9 scrub ok
Jan 31 08:11:27 compute-0 sudo[103595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msngjariofgumtlyhotfpkjkxeltozwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847086.5427957-203-259360277590836/AnsiballZ_file.py'
Jan 31 08:11:27 compute-0 sudo[103595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:27 compute-0 python3.9[103597]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:11:27 compute-0 sudo[103595]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:27 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 31 08:11:27 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 31 08:11:27 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 08:11:27 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 08:11:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:28 compute-0 sudo[103747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvxejrigwehtoityljpfmsjhdxikzlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847087.8225472-224-131236849945128/AnsiballZ_stat.py'
Jan 31 08:11:28 compute-0 sudo[103747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:28 compute-0 python3.9[103749]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:11:28 compute-0 sudo[103747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:28 compute-0 ceph-mon[75227]: 3.5 scrub starts
Jan 31 08:11:28 compute-0 ceph-mon[75227]: 3.5 scrub ok
Jan 31 08:11:28 compute-0 ceph-mon[75227]: 11.6 scrub starts
Jan 31 08:11:28 compute-0 ceph-mon[75227]: 11.6 scrub ok
Jan 31 08:11:28 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 31 08:11:28 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 31 08:11:29 compute-0 sudo[103901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouuuvycgqguejzxbgehoodrmhfleenjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847088.7363937-237-35770321752754/AnsiballZ_getent.py'
Jan 31 08:11:29 compute-0 sudo[103901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:29 compute-0 ceph-mon[75227]: 5.12 scrub starts
Jan 31 08:11:29 compute-0 ceph-mon[75227]: 5.12 scrub ok
Jan 31 08:11:29 compute-0 ceph-mon[75227]: 11.9 scrub starts
Jan 31 08:11:29 compute-0 ceph-mon[75227]: 11.9 scrub ok
Jan 31 08:11:29 compute-0 ceph-mon[75227]: pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:29 compute-0 ceph-mon[75227]: 5.11 scrub starts
Jan 31 08:11:29 compute-0 ceph-mon[75227]: 5.11 scrub ok
Jan 31 08:11:29 compute-0 python3.9[103903]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 08:11:29 compute-0 sudo[103901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:30 compute-0 sudo[104054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvwxigtptctioitqdibxvdvjbrxwoive ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847089.798047-247-252397076645059/AnsiballZ_getent.py'
Jan 31 08:11:30 compute-0 sudo[104054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:30 compute-0 python3.9[104056]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 08:11:30 compute-0 sudo[104054]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:30 compute-0 ceph-mon[75227]: pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:30 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 31 08:11:30 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 31 08:11:30 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 31 08:11:30 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 31 08:11:30 compute-0 sudo[104207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrshtndzdfmbxsjcrfksgjhhkpsqusce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847090.4679012-255-132003854678725/AnsiballZ_group.py'
Jan 31 08:11:30 compute-0 sudo[104207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:31 compute-0 python3.9[104209]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:11:31 compute-0 sudo[104207]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:31 compute-0 sudo[104359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imikomvmapyrikoyhzcratmmnllzavxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847091.3218327-264-6896366430259/AnsiballZ_file.py'
Jan 31 08:11:31 compute-0 sudo[104359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:11:31
Jan 31 08:11:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:11:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:11:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.meta']
Jan 31 08:11:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:11:31 compute-0 python3.9[104361]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 08:11:31 compute-0 sudo[104359]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:31 compute-0 ceph-mon[75227]: 7.5 scrub starts
Jan 31 08:11:31 compute-0 ceph-mon[75227]: 7.5 scrub ok
Jan 31 08:11:31 compute-0 ceph-mon[75227]: 4.8 scrub starts
Jan 31 08:11:31 compute-0 ceph-mon[75227]: 4.8 scrub ok
Jan 31 08:11:32 compute-0 sudo[104511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcmcsaqlsvputjohtstjxnysksdhurcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847092.032283-275-225932417021929/AnsiballZ_dnf.py'
Jan 31 08:11:32 compute-0 sudo[104511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:32 compute-0 python3.9[104513]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:11:32 compute-0 ceph-mon[75227]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:11:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:11:32 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 31 08:11:32 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 31 08:11:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:33 compute-0 sudo[104511]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:33 compute-0 ceph-mon[75227]: 3.3 scrub starts
Jan 31 08:11:33 compute-0 ceph-mon[75227]: 3.3 scrub ok
Jan 31 08:11:34 compute-0 sudo[104665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dolxidznubeenlbkwafmkkcygblaobxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847093.950779-283-19821751035807/AnsiballZ_file.py'
Jan 31 08:11:34 compute-0 sudo[104665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:34 compute-0 python3.9[104667]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:34 compute-0 sudo[104665]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:34 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 08:11:34 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 08:11:34 compute-0 sudo[104817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcydqqxgvhtywbtcspqaspeecijxlzqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847094.5211408-291-209530088558060/AnsiballZ_stat.py'
Jan 31 08:11:34 compute-0 sudo[104817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:34 compute-0 python3.9[104819]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:35 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 31 08:11:35 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 31 08:11:35 compute-0 sudo[104817]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:35 compute-0 ceph-mon[75227]: pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:35 compute-0 sudo[104895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlykwgwguqousdifjuxruprookairmsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847094.5211408-291-209530088558060/AnsiballZ_file.py'
Jan 31 08:11:35 compute-0 sudo[104895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:35 compute-0 python3.9[104897]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:35 compute-0 sudo[104895]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:35 compute-0 sudo[105047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkkvdxfdugdusbrkknuopwfqiwqtgrdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847095.578445-304-68602130127021/AnsiballZ_stat.py'
Jan 31 08:11:35 compute-0 sudo[105047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:36 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 08:11:36 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 08:11:36 compute-0 python3.9[105049]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:11:36 compute-0 sudo[105047]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:36 compute-0 ceph-mon[75227]: 11.b scrub starts
Jan 31 08:11:36 compute-0 ceph-mon[75227]: 11.b scrub ok
Jan 31 08:11:36 compute-0 ceph-mon[75227]: 11.e scrub starts
Jan 31 08:11:36 compute-0 ceph-mon[75227]: 11.e scrub ok
Jan 31 08:11:36 compute-0 sudo[105125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsvwtyypkvzrkqrflefppxbojljteofy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847095.578445-304-68602130127021/AnsiballZ_file.py'
Jan 31 08:11:36 compute-0 sudo[105125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:36 compute-0 python3.9[105127]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:11:36 compute-0 sudo[105125]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:36 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 08:11:36 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 08:11:37 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 31 08:11:37 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 31 08:11:37 compute-0 sudo[105277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guuqwoykkzwtabeksaopnvqoormirsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847096.8231785-319-164828869598574/AnsiballZ_dnf.py'
Jan 31 08:11:37 compute-0 sudo[105277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:37 compute-0 python3.9[105279]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:11:37 compute-0 ceph-mon[75227]: pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:37 compute-0 ceph-mon[75227]: 3.6 scrub starts
Jan 31 08:11:37 compute-0 ceph-mon[75227]: 3.6 scrub ok
Jan 31 08:11:37 compute-0 ceph-mon[75227]: 10.19 scrub starts
Jan 31 08:11:37 compute-0 ceph-mon[75227]: 10.19 scrub ok
Jan 31 08:11:37 compute-0 ceph-mon[75227]: 11.f scrub starts
Jan 31 08:11:37 compute-0 ceph-mon[75227]: 11.f scrub ok
Jan 31 08:11:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:38 compute-0 ceph-mon[75227]: pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:38 compute-0 sudo[105277]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:39 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 31 08:11:39 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 31 08:11:39 compute-0 python3.9[105430]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:11:39 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 08:11:39 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 08:11:39 compute-0 ceph-mon[75227]: 5.9 scrub starts
Jan 31 08:11:39 compute-0 ceph-mon[75227]: 5.9 scrub ok
Jan 31 08:11:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:39 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 31 08:11:39 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 31 08:11:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 31 08:11:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 31 08:11:40 compute-0 python3.9[105582]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 08:11:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:40 compute-0 python3.9[105732]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:11:40 compute-0 ceph-mon[75227]: 7.e scrub starts
Jan 31 08:11:40 compute-0 ceph-mon[75227]: 7.e scrub ok
Jan 31 08:11:40 compute-0 ceph-mon[75227]: pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:40 compute-0 ceph-mon[75227]: 8.c scrub starts
Jan 31 08:11:40 compute-0 ceph-mon[75227]: 8.c scrub ok
Jan 31 08:11:40 compute-0 ceph-mon[75227]: 5.16 scrub starts
Jan 31 08:11:40 compute-0 ceph-mon[75227]: 5.16 scrub ok
Jan 31 08:11:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 31 08:11:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 31 08:11:41 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 31 08:11:41 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 31 08:11:41 compute-0 sudo[105882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwvabiuqnltepsykeplgpbuhscnhvryo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847101.1578772-360-30730721969080/AnsiballZ_systemd.py'
Jan 31 08:11:41 compute-0 sudo[105882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 31 08:11:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 31 08:11:42 compute-0 python3.9[105884]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:11:42 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 08:11:42 compute-0 ceph-mon[75227]: 10.6 scrub starts
Jan 31 08:11:42 compute-0 ceph-mon[75227]: 10.6 scrub ok
Jan 31 08:11:42 compute-0 ceph-mon[75227]: 3.a scrub starts
Jan 31 08:11:42 compute-0 ceph-mon[75227]: 3.a scrub ok
Jan 31 08:11:42 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 08:11:42 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 08:11:42 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 08:11:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 08:11:42 compute-0 sudo[105882]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 31 08:11:42 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 31 08:11:42 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:11:43 compute-0 python3.9[106047]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:11:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 08:11:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 08:11:43 compute-0 ceph-mon[75227]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:43 compute-0 ceph-mon[75227]: 2.d scrub starts
Jan 31 08:11:43 compute-0 ceph-mon[75227]: 2.d scrub ok
Jan 31 08:11:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:44 compute-0 ceph-mon[75227]: 10.b scrub starts
Jan 31 08:11:44 compute-0 ceph-mon[75227]: 10.b scrub ok
Jan 31 08:11:44 compute-0 ceph-mon[75227]: 8.e scrub starts
Jan 31 08:11:44 compute-0 ceph-mon[75227]: 8.e scrub ok
Jan 31 08:11:44 compute-0 ceph-mon[75227]: pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:44 compute-0 sudo[106197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxomijjczgdltihvvmrihwxsrnqplncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847104.56275-417-15384121591302/AnsiballZ_systemd.py'
Jan 31 08:11:44 compute-0 sudo[106197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:45 compute-0 python3.9[106199]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:11:45 compute-0 sudo[106197]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:45 compute-0 sudo[106351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oerwxxmxsqwiqhgeyfmyejxnuwkbwnzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847105.3105054-417-147583705008378/AnsiballZ_systemd.py'
Jan 31 08:11:45 compute-0 sudo[106351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:45 compute-0 python3.9[106353]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:11:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 31 08:11:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 31 08:11:45 compute-0 sudo[106351]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:46 compute-0 sshd-session[99643]: Connection closed by 192.168.122.30 port 52690
Jan 31 08:11:46 compute-0 sshd-session[99640]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:11:46 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 08:11:46 compute-0 systemd[1]: session-35.scope: Consumed 1min 1.798s CPU time.
Jan 31 08:11:46 compute-0 systemd-logind[793]: Session 35 logged out. Waiting for processes to exit.
Jan 31 08:11:46 compute-0 systemd-logind[793]: Removed session 35.
Jan 31 08:11:46 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 08:11:46 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 08:11:47 compute-0 ceph-mon[75227]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:47 compute-0 ceph-mon[75227]: 2.3 scrub starts
Jan 31 08:11:47 compute-0 ceph-mon[75227]: 2.3 scrub ok
Jan 31 08:11:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:47 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 31 08:11:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 31 08:11:47 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 31 08:11:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 31 08:11:48 compute-0 ceph-mon[75227]: 7.f scrub starts
Jan 31 08:11:48 compute-0 ceph-mon[75227]: 7.f scrub ok
Jan 31 08:11:48 compute-0 ceph-mon[75227]: pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:48 compute-0 ceph-mon[75227]: 11.1 scrub starts
Jan 31 08:11:48 compute-0 ceph-mon[75227]: 5.f scrub starts
Jan 31 08:11:48 compute-0 ceph-mon[75227]: 11.1 scrub ok
Jan 31 08:11:48 compute-0 ceph-mon[75227]: 5.f scrub ok
Jan 31 08:11:48 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 08:11:48 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 08:11:49 compute-0 ceph-mon[75227]: 11.2 scrub starts
Jan 31 08:11:49 compute-0 ceph-mon[75227]: 11.2 scrub ok
Jan 31 08:11:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:49 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 08:11:49 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 08:11:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:50 compute-0 ceph-mon[75227]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:50 compute-0 ceph-mon[75227]: 7.8 scrub starts
Jan 31 08:11:50 compute-0 ceph-mon[75227]: 7.8 scrub ok
Jan 31 08:11:50 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 31 08:11:50 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 31 08:11:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:51 compute-0 ceph-mon[75227]: 8.4 scrub starts
Jan 31 08:11:51 compute-0 ceph-mon[75227]: 8.4 scrub ok
Jan 31 08:11:52 compute-0 sshd-session[106380]: Accepted publickey for zuul from 192.168.122.30 port 49168 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:11:52 compute-0 systemd-logind[793]: New session 36 of user zuul.
Jan 31 08:11:52 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 31 08:11:52 compute-0 sshd-session[106380]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:11:52 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 31 08:11:52 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 31 08:11:52 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 08:11:52 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 08:11:52 compute-0 ceph-mon[75227]: pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:53 compute-0 python3.9[106533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:11:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:53 compute-0 sudo[106687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoofdkwxgvdbotkwwpcttfgqhbvbujop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847113.6200252-31-24753719511605/AnsiballZ_getent.py'
Jan 31 08:11:53 compute-0 sudo[106687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:54 compute-0 ceph-mon[75227]: 11.12 scrub starts
Jan 31 08:11:54 compute-0 ceph-mon[75227]: 11.12 scrub ok
Jan 31 08:11:54 compute-0 ceph-mon[75227]: 4.7 scrub starts
Jan 31 08:11:54 compute-0 ceph-mon[75227]: 4.7 scrub ok
Jan 31 08:11:54 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 08:11:54 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 08:11:54 compute-0 python3.9[106689]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 08:11:54 compute-0 sudo[106687]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:54 compute-0 sudo[106840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzvtqsmcxtupqbvyhespjclcfzltkrga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847114.473214-43-188447548221170/AnsiballZ_setup.py'
Jan 31 08:11:54 compute-0 sudo[106840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:55 compute-0 python3.9[106842]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:11:55 compute-0 ceph-mon[75227]: pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:55 compute-0 ceph-mon[75227]: 3.17 scrub starts
Jan 31 08:11:55 compute-0 ceph-mon[75227]: 3.17 scrub ok
Jan 31 08:11:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:11:55 compute-0 sudo[106840]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:55 compute-0 sudo[106924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypmtpecvnpdlknbtpxkjxawimjiqgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847114.473214-43-188447548221170/AnsiballZ_dnf.py'
Jan 31 08:11:55 compute-0 sudo[106924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:55 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 31 08:11:55 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 31 08:11:55 compute-0 python3.9[106926]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 08:11:56 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 31 08:11:56 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 31 08:11:56 compute-0 ceph-mon[75227]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:56 compute-0 ceph-mon[75227]: 7.13 scrub starts
Jan 31 08:11:56 compute-0 ceph-mon[75227]: 7.13 scrub ok
Jan 31 08:11:57 compute-0 sudo[106924]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:57 compute-0 ceph-mon[75227]: 3.e scrub starts
Jan 31 08:11:57 compute-0 ceph-mon[75227]: 3.e scrub ok
Jan 31 08:11:57 compute-0 sudo[107078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wojzqmkcrcskbdjnnejjfjkswukrdarh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847117.5116153-57-219910621341798/AnsiballZ_dnf.py'
Jan 31 08:11:57 compute-0 sudo[107078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:11:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 31 08:11:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 31 08:11:57 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 08:11:57 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 08:11:58 compute-0 python3.9[107080]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:11:58 compute-0 ceph-mon[75227]: pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:58 compute-0 ceph-mon[75227]: 10.2 scrub starts
Jan 31 08:11:58 compute-0 ceph-mon[75227]: 10.2 scrub ok
Jan 31 08:11:58 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 31 08:11:58 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 31 08:11:58 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 31 08:11:58 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 31 08:11:59 compute-0 sudo[107078]: pam_unix(sudo:session): session closed for user root
Jan 31 08:11:59 compute-0 ceph-mon[75227]: 7.a scrub starts
Jan 31 08:11:59 compute-0 ceph-mon[75227]: 7.a scrub ok
Jan 31 08:11:59 compute-0 ceph-mon[75227]: 4.9 scrub starts
Jan 31 08:11:59 compute-0 ceph-mon[75227]: 4.9 scrub ok
Jan 31 08:11:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:11:59 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 31 08:11:59 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 31 08:12:00 compute-0 sudo[107231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgpsyzlkdfrxjaqejjiyxivgerghgofs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847119.3909276-65-263309466345955/AnsiballZ_systemd.py'
Jan 31 08:12:00 compute-0 sudo[107231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:00 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 31 08:12:00 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 31 08:12:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:00 compute-0 python3.9[107233]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:12:00 compute-0 sudo[107231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:00 compute-0 ceph-mon[75227]: 7.15 scrub starts
Jan 31 08:12:00 compute-0 ceph-mon[75227]: 7.15 scrub ok
Jan 31 08:12:00 compute-0 ceph-mon[75227]: pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:00 compute-0 ceph-mon[75227]: 7.3 scrub starts
Jan 31 08:12:00 compute-0 ceph-mon[75227]: 7.3 scrub ok
Jan 31 08:12:01 compute-0 sudo[107387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:01 compute-0 sudo[107387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:01 compute-0 sudo[107387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:01 compute-0 sudo[107412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:12:01 compute-0 sudo[107412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:01 compute-0 python3.9[107386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:01 compute-0 ceph-mon[75227]: 3.11 scrub starts
Jan 31 08:12:01 compute-0 ceph-mon[75227]: 3.11 scrub ok
Jan 31 08:12:01 compute-0 sudo[107412]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:12:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:12:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:12:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:12:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:12:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:12:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:12:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:12:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:12:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:12:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:01 compute-0 sudo[107592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:01 compute-0 sudo[107592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:01 compute-0 sudo[107592]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:01 compute-0 sudo[107643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxmpeehyrzmwplhuuffxxhdvuunbxxac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847121.4975896-83-105769657593888/AnsiballZ_sefcontext.py'
Jan 31 08:12:01 compute-0 sudo[107643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:01 compute-0 sudo[107644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:12:01 compute-0 sudo[107644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:02 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 08:12:02 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 08:12:02 compute-0 python3.9[107651]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.241756264 +0000 UTC m=+0.019889422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.355484474 +0000 UTC m=+0.133617642 container create 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:12:02 compute-0 systemd[76601]: Created slice User Background Tasks Slice.
Jan 31 08:12:02 compute-0 systemd[76601]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 08:12:02 compute-0 systemd[1]: Started libpod-conmon-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope.
Jan 31 08:12:02 compute-0 systemd[76601]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 08:12:02 compute-0 sudo[107643]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.459902237 +0000 UTC m=+0.238035505 container init 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.467545289 +0000 UTC m=+0.245678467 container start 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:12:02 compute-0 determined_boyd[107700]: 167 167
Jan 31 08:12:02 compute-0 systemd[1]: libpod-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope: Deactivated successfully.
Jan 31 08:12:02 compute-0 conmon[107700]: conmon 28ba0dcea78e3eb67911 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope/container/memory.events
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.535531172 +0000 UTC m=+0.313664320 container attach 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.537079165 +0000 UTC m=+0.315212313 container died 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:12:02 compute-0 ceph-mon[75227]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:12:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:12:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:12:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:12:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:12:02 compute-0 ceph-mon[75227]: 8.1d scrub starts
Jan 31 08:12:02 compute-0 ceph-mon[75227]: 8.1d scrub ok
Jan 31 08:12:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a542f14f33c45870701405b7be590da40a570738c072df8d43137e29dd98fe7-merged.mount: Deactivated successfully.
Jan 31 08:12:02 compute-0 podman[107683]: 2026-01-31 08:12:02.966727097 +0000 UTC m=+0.744860235 container remove 28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:03 compute-0 systemd[1]: libpod-conmon-28ba0dcea78e3eb679116bda29aced37d6360f1a017c8c9ca3468b6ceaa9240b.scope: Deactivated successfully.
Jan 31 08:12:03 compute-0 python3.9[107868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:03 compute-0 podman[107876]: 2026-01-31 08:12:03.17587171 +0000 UTC m=+0.093174142 container create c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:12:03 compute-0 podman[107876]: 2026-01-31 08:12:03.109906823 +0000 UTC m=+0.027209275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:03 compute-0 systemd[1]: Started libpod-conmon-c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d.scope.
Jan 31 08:12:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:03 compute-0 podman[107876]: 2026-01-31 08:12:03.327866631 +0000 UTC m=+0.245169143 container init c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:03 compute-0 podman[107876]: 2026-01-31 08:12:03.334650789 +0000 UTC m=+0.251953231 container start c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:12:03 compute-0 podman[107876]: 2026-01-31 08:12:03.381661501 +0000 UTC m=+0.298964043 container attach c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:12:03 compute-0 objective_khayyam[107897]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:12:03 compute-0 objective_khayyam[107897]: --> All data devices are unavailable
Jan 31 08:12:03 compute-0 systemd[1]: libpod-c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d.scope: Deactivated successfully.
Jan 31 08:12:03 compute-0 podman[107876]: 2026-01-31 08:12:03.762695215 +0000 UTC m=+0.679997647 container died c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:12:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:03 compute-0 sudo[108075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahqfyvnvdxycuatorqssxsoexahjzzlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847123.5623674-101-263753793320623/AnsiballZ_dnf.py'
Jan 31 08:12:03 compute-0 sudo[108075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:03 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 31 08:12:03 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 31 08:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b422702296ef10ae35232f42220ba6ab54eaba9ffce8af834348d11edb31cb41-merged.mount: Deactivated successfully.
Jan 31 08:12:04 compute-0 python3.9[108082]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:04 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 31 08:12:04 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 31 08:12:04 compute-0 podman[107876]: 2026-01-31 08:12:04.113578735 +0000 UTC m=+1.030881177 container remove c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:12:04 compute-0 systemd[1]: libpod-conmon-c2b5a0769a17c24ef9ab6eec2c45fffe8c47ab1ad1bbe187adae0bbc85e2a23d.scope: Deactivated successfully.
Jan 31 08:12:04 compute-0 sudo[107644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-0 sudo[108086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:04 compute-0 sudo[108086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:04 compute-0 sudo[108086]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:04 compute-0 sudo[108111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:12:04 compute-0 sudo[108111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:04 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 31 08:12:04 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 31 08:12:04 compute-0 podman[108151]: 2026-01-31 08:12:04.796090511 +0000 UTC m=+0.020965271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:04 compute-0 podman[108151]: 2026-01-31 08:12:04.919597313 +0000 UTC m=+0.144472083 container create f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:12:04 compute-0 systemd[1]: Started libpod-conmon-f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf.scope.
Jan 31 08:12:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:05 compute-0 podman[108151]: 2026-01-31 08:12:05.014764079 +0000 UTC m=+0.239639299 container init f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:12:05 compute-0 ceph-mon[75227]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:05 compute-0 ceph-mon[75227]: 2.5 scrub starts
Jan 31 08:12:05 compute-0 ceph-mon[75227]: 2.5 scrub ok
Jan 31 08:12:05 compute-0 ceph-mon[75227]: 3.15 scrub starts
Jan 31 08:12:05 compute-0 ceph-mon[75227]: 3.15 scrub ok
Jan 31 08:12:05 compute-0 podman[108151]: 2026-01-31 08:12:05.023164512 +0000 UTC m=+0.248039252 container start f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:12:05 compute-0 competent_bohr[108166]: 167 167
Jan 31 08:12:05 compute-0 systemd[1]: libpod-f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf.scope: Deactivated successfully.
Jan 31 08:12:05 compute-0 podman[108151]: 2026-01-31 08:12:05.036512831 +0000 UTC m=+0.261387601 container attach f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:05 compute-0 podman[108151]: 2026-01-31 08:12:05.036929003 +0000 UTC m=+0.261803743 container died f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-097b225f4082d498242581bde5d605881fb6107b2f5c3f71b0c58538869c289b-merged.mount: Deactivated successfully.
Jan 31 08:12:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:05 compute-0 podman[108151]: 2026-01-31 08:12:05.261106003 +0000 UTC m=+0.485980773 container remove f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:12:05 compute-0 systemd[1]: libpod-conmon-f81e99709a289398cd097322cb3458212c044c398afeec1f38686a882b6cccbf.scope: Deactivated successfully.
Jan 31 08:12:05 compute-0 sudo[108075]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:05 compute-0 podman[108216]: 2026-01-31 08:12:05.49492926 +0000 UTC m=+0.115433499 container create b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:12:05 compute-0 podman[108216]: 2026-01-31 08:12:05.417896276 +0000 UTC m=+0.038400605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:05 compute-0 systemd[1]: Started libpod-conmon-b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab.scope.
Jan 31 08:12:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:05 compute-0 podman[108216]: 2026-01-31 08:12:05.741792659 +0000 UTC m=+0.362296988 container init b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:12:05 compute-0 podman[108216]: 2026-01-31 08:12:05.748517695 +0000 UTC m=+0.369021974 container start b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:05 compute-0 podman[108216]: 2026-01-31 08:12:05.77180739 +0000 UTC m=+0.392311729 container attach b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:12:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:05 compute-0 sudo[108362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewopkbjhytsrknntrybfbizxmrdttola ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847125.4766405-109-224569217651039/AnsiballZ_command.py'
Jan 31 08:12:05 compute-0 sudo[108362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:06 compute-0 epic_wiles[108284]: {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:     "0": [
Jan 31 08:12:06 compute-0 epic_wiles[108284]:         {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "devices": [
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "/dev/loop3"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             ],
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_name": "ceph_lv0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_size": "21470642176",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "name": "ceph_lv0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "tags": {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.crush_device_class": "",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.encrypted": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.objectstore": "bluestore",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osd_id": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.type": "block",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.vdo": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.with_tpm": "0"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             },
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "type": "block",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "vg_name": "ceph_vg0"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:         }
Jan 31 08:12:06 compute-0 epic_wiles[108284]:     ],
Jan 31 08:12:06 compute-0 epic_wiles[108284]:     "1": [
Jan 31 08:12:06 compute-0 epic_wiles[108284]:         {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "devices": [
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "/dev/loop4"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             ],
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_name": "ceph_lv1",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_size": "21470642176",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "name": "ceph_lv1",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "tags": {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.crush_device_class": "",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.encrypted": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.objectstore": "bluestore",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osd_id": "1",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.type": "block",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.vdo": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.with_tpm": "0"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             },
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "type": "block",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "vg_name": "ceph_vg1"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:         }
Jan 31 08:12:06 compute-0 epic_wiles[108284]:     ],
Jan 31 08:12:06 compute-0 epic_wiles[108284]:     "2": [
Jan 31 08:12:06 compute-0 epic_wiles[108284]:         {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "devices": [
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "/dev/loop5"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             ],
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_name": "ceph_lv2",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_size": "21470642176",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "name": "ceph_lv2",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "tags": {
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.cluster_name": "ceph",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.crush_device_class": "",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.encrypted": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.objectstore": "bluestore",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osd_id": "2",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.type": "block",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.vdo": "0",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:                 "ceph.with_tpm": "0"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             },
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "type": "block",
Jan 31 08:12:06 compute-0 epic_wiles[108284]:             "vg_name": "ceph_vg2"
Jan 31 08:12:06 compute-0 epic_wiles[108284]:         }
Jan 31 08:12:06 compute-0 epic_wiles[108284]:     ]
Jan 31 08:12:06 compute-0 epic_wiles[108284]: }
Jan 31 08:12:06 compute-0 python3.9[108364]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:06 compute-0 systemd[1]: libpod-b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab.scope: Deactivated successfully.
Jan 31 08:12:06 compute-0 podman[108216]: 2026-01-31 08:12:06.053517744 +0000 UTC m=+0.674021983 container died b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:06 compute-0 ceph-mon[75227]: 8.1b scrub starts
Jan 31 08:12:06 compute-0 ceph-mon[75227]: 8.1b scrub ok
Jan 31 08:12:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c92213ed515bbe51ac35bf7c1c58bf85124edd4dd17b36dd7a25e50270a9c73-merged.mount: Deactivated successfully.
Jan 31 08:12:06 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 08:12:06 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 08:12:06 compute-0 sudo[108362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:06 compute-0 podman[108216]: 2026-01-31 08:12:06.710533544 +0000 UTC m=+1.331037803 container remove b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:06 compute-0 systemd[1]: libpod-conmon-b8214d7187abfc18819cd8bedaa30b70d0f4fd261444e3ddf084f7ae7cf2c7ab.scope: Deactivated successfully.
Jan 31 08:12:06 compute-0 sudo[108111]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:06 compute-0 sudo[108540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:12:06 compute-0 sudo[108540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:06 compute-0 sudo[108540]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:06 compute-0 sudo[108583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:12:06 compute-0 sudo[108583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 08:12:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.088017631 +0000 UTC m=+0.017926888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.196156476 +0000 UTC m=+0.126065693 container create 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:12:07 compute-0 ceph-mon[75227]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:07 compute-0 sudo[108743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffopzvynhwjkdlrscgrrrnlqmvgragdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847126.8363712-117-255817659818682/AnsiballZ_file.py'
Jan 31 08:12:07 compute-0 sudo[108743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:07 compute-0 systemd[1]: Started libpod-conmon-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope.
Jan 31 08:12:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.521687553 +0000 UTC m=+0.451596810 container init 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.52879726 +0000 UTC m=+0.458706497 container start 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.533711326 +0000 UTC m=+0.463620573 container attach 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:12:07 compute-0 systemd[1]: libpod-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope: Deactivated successfully.
Jan 31 08:12:07 compute-0 tender_austin[108748]: 167 167
Jan 31 08:12:07 compute-0 conmon[108748]: conmon 2eb3ec876a50b90ca122 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope/container/memory.events
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.539446665 +0000 UTC m=+0.469355882 container died 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b6c4ddbe4b9c432c5bbb9e770ed12c20e686f5bac3df614e88240dded7f7380-merged.mount: Deactivated successfully.
Jan 31 08:12:07 compute-0 python3.9[108745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 08:12:07 compute-0 podman[108656]: 2026-01-31 08:12:07.588804252 +0000 UTC m=+0.518713489 container remove 2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_austin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:12:07 compute-0 sudo[108743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:07 compute-0 systemd[1]: libpod-conmon-2eb3ec876a50b90ca1227280079ff49c908a84fb1f5dfc9a5f3ec8acd7f97284.scope: Deactivated successfully.
Jan 31 08:12:07 compute-0 podman[108796]: 2026-01-31 08:12:07.7403518 +0000 UTC m=+0.066223845 container create fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:12:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:07 compute-0 systemd[1]: Started libpod-conmon-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope.
Jan 31 08:12:07 compute-0 podman[108796]: 2026-01-31 08:12:07.707979723 +0000 UTC m=+0.033851819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:12:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:12:07 compute-0 podman[108796]: 2026-01-31 08:12:07.887073325 +0000 UTC m=+0.212945410 container init fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:12:07 compute-0 podman[108796]: 2026-01-31 08:12:07.892060633 +0000 UTC m=+0.217932638 container start fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:12:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 08:12:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 08:12:07 compute-0 podman[108796]: 2026-01-31 08:12:07.958691339 +0000 UTC m=+0.284563424 container attach fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:12:08 compute-0 python3.9[108952]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:12:08 compute-0 lvm[109043]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:12:08 compute-0 lvm[109042]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:12:08 compute-0 lvm[109043]: VG ceph_vg1 finished
Jan 31 08:12:08 compute-0 lvm[109042]: VG ceph_vg0 finished
Jan 31 08:12:08 compute-0 lvm[109045]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:12:08 compute-0 lvm[109045]: VG ceph_vg2 finished
Jan 31 08:12:08 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 31 08:12:08 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 31 08:12:08 compute-0 ceph-mon[75227]: 11.18 scrub starts
Jan 31 08:12:08 compute-0 ceph-mon[75227]: 11.18 scrub ok
Jan 31 08:12:08 compute-0 ceph-mon[75227]: 5.c scrub starts
Jan 31 08:12:08 compute-0 ceph-mon[75227]: 5.c scrub ok
Jan 31 08:12:08 compute-0 ceph-mon[75227]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:08 compute-0 ceph-mon[75227]: 2.9 scrub starts
Jan 31 08:12:08 compute-0 ceph-mon[75227]: 2.9 scrub ok
Jan 31 08:12:08 compute-0 loving_volhard[108853]: {}
Jan 31 08:12:08 compute-0 systemd[1]: libpod-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope: Deactivated successfully.
Jan 31 08:12:08 compute-0 systemd[1]: libpod-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope: Consumed 1.015s CPU time.
Jan 31 08:12:08 compute-0 podman[108796]: 2026-01-31 08:12:08.654472413 +0000 UTC m=+0.980344418 container died fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:12:08 compute-0 sudo[109184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybnsxwybhlxrgmnufvicyceezynqkdcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847128.578743-133-87116722451806/AnsiballZ_dnf.py'
Jan 31 08:12:08 compute-0 sudo[109184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:08 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 08:12:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a070540e944601aedebabf8afd9867306d10d67d7486eb7d9671e0d99919ed5e-merged.mount: Deactivated successfully.
Jan 31 08:12:08 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 08:12:09 compute-0 python3.9[109186]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:09 compute-0 podman[108796]: 2026-01-31 08:12:09.388787044 +0000 UTC m=+1.714659059 container remove fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_volhard, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 31 08:12:09 compute-0 systemd[1]: libpod-conmon-fa5cebd234de9cd98900b56e343c98083de684732a9a8af0e2b25da6d14cc05a.scope: Deactivated successfully.
Jan 31 08:12:09 compute-0 sudo[108583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:12:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:12:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:12:09 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:12:09 compute-0 sudo[109189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:12:09 compute-0 sudo[109189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:12:09 compute-0 sudo[109189]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:09 compute-0 ceph-mon[75227]: 7.11 scrub starts
Jan 31 08:12:09 compute-0 ceph-mon[75227]: 7.11 scrub ok
Jan 31 08:12:09 compute-0 ceph-mon[75227]: 4.4 scrub starts
Jan 31 08:12:09 compute-0 ceph-mon[75227]: 4.4 scrub ok
Jan 31 08:12:09 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:12:09 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:12:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:10 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 31 08:12:10 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 31 08:12:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:10 compute-0 sudo[109184]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:10 compute-0 sudo[109363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heiowlanfbjvgyhpjbtxhxfpvfeaeijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847130.6306427-142-250249761783233/AnsiballZ_dnf.py'
Jan 31 08:12:10 compute-0 sudo[109363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:10 compute-0 ceph-mon[75227]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:10 compute-0 ceph-mon[75227]: 8.1f scrub starts
Jan 31 08:12:10 compute-0 ceph-mon[75227]: 8.1f scrub ok
Jan 31 08:12:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 31 08:12:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 31 08:12:11 compute-0 python3.9[109365]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 31 08:12:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 31 08:12:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:11 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 31 08:12:12 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 31 08:12:12 compute-0 ceph-mon[75227]: 5.1 scrub starts
Jan 31 08:12:12 compute-0 ceph-mon[75227]: 5.1 scrub ok
Jan 31 08:12:12 compute-0 ceph-mon[75227]: 3.9 scrub starts
Jan 31 08:12:12 compute-0 ceph-mon[75227]: 3.9 scrub ok
Jan 31 08:12:12 compute-0 sudo[109363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:12 compute-0 sudo[109516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhvglfcnxdfebfiwhwuxujggeoejnev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847132.5672338-154-20355402946752/AnsiballZ_stat.py'
Jan 31 08:12:12 compute-0 sudo[109516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:13 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 31 08:12:13 compute-0 python3.9[109518]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:12:13 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 31 08:12:13 compute-0 ceph-mon[75227]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:13 compute-0 ceph-mon[75227]: 10.11 scrub starts
Jan 31 08:12:13 compute-0 ceph-mon[75227]: 10.11 scrub ok
Jan 31 08:12:13 compute-0 sudo[109516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:13 compute-0 sudo[109670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idiuiegrwgandkeicreaxscswtplbfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847133.2069075-162-130014767098273/AnsiballZ_slurp.py'
Jan 31 08:12:13 compute-0 sudo[109670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:13 compute-0 python3.9[109672]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 08:12:13 compute-0 sudo[109670]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:13 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 31 08:12:14 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 31 08:12:14 compute-0 ceph-mon[75227]: 10.10 scrub starts
Jan 31 08:12:14 compute-0 ceph-mon[75227]: 10.10 scrub ok
Jan 31 08:12:14 compute-0 sshd-session[106383]: Connection closed by 192.168.122.30 port 49168
Jan 31 08:12:14 compute-0 sshd-session[106380]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:12:14 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 08:12:14 compute-0 systemd[1]: session-36.scope: Consumed 16.357s CPU time.
Jan 31 08:12:14 compute-0 systemd-logind[793]: Session 36 logged out. Waiting for processes to exit.
Jan 31 08:12:14 compute-0 systemd-logind[793]: Removed session 36.
Jan 31 08:12:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:15 compute-0 ceph-mon[75227]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:15 compute-0 ceph-mon[75227]: 10.13 scrub starts
Jan 31 08:12:15 compute-0 ceph-mon[75227]: 10.13 scrub ok
Jan 31 08:12:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 31 08:12:16 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 31 08:12:16 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 31 08:12:16 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 31 08:12:16 compute-0 ceph-mon[75227]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:16 compute-0 ceph-mon[75227]: 4.5 scrub starts
Jan 31 08:12:16 compute-0 ceph-mon[75227]: 4.5 scrub ok
Jan 31 08:12:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 31 08:12:16 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 31 08:12:17 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 31 08:12:17 compute-0 ceph-mon[75227]: 11.19 scrub starts
Jan 31 08:12:17 compute-0 ceph-mon[75227]: 11.19 scrub ok
Jan 31 08:12:17 compute-0 ceph-mon[75227]: 11.1a scrub starts
Jan 31 08:12:17 compute-0 ceph-mon[75227]: 11.1a scrub ok
Jan 31 08:12:17 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 31 08:12:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:18 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 31 08:12:18 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 31 08:12:18 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 31 08:12:18 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 31 08:12:18 compute-0 ceph-mon[75227]: 11.1b scrub starts
Jan 31 08:12:18 compute-0 ceph-mon[75227]: 11.1b scrub ok
Jan 31 08:12:18 compute-0 ceph-mon[75227]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:18 compute-0 ceph-mon[75227]: 5.1a scrub starts
Jan 31 08:12:18 compute-0 ceph-mon[75227]: 5.1a scrub ok
Jan 31 08:12:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 31 08:12:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 31 08:12:19 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 31 08:12:19 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 31 08:12:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:19 compute-0 ceph-mon[75227]: 11.1f scrub starts
Jan 31 08:12:19 compute-0 ceph-mon[75227]: 11.1f scrub ok
Jan 31 08:12:19 compute-0 ceph-mon[75227]: 2.4 scrub starts
Jan 31 08:12:19 compute-0 ceph-mon[75227]: 2.4 scrub ok
Jan 31 08:12:19 compute-0 ceph-mon[75227]: 11.1c scrub starts
Jan 31 08:12:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 08:12:19 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 08:12:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:20 compute-0 sshd-session[109697]: Accepted publickey for zuul from 192.168.122.30 port 45470 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:12:20 compute-0 systemd-logind[793]: New session 37 of user zuul.
Jan 31 08:12:20 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 31 08:12:20 compute-0 sshd-session[109697]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:12:20 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 31 08:12:20 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 31 08:12:21 compute-0 ceph-mon[75227]: 11.1c scrub ok
Jan 31 08:12:21 compute-0 ceph-mon[75227]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:21 compute-0 ceph-mon[75227]: 5.1d scrub starts
Jan 31 08:12:21 compute-0 ceph-mon[75227]: 5.1d scrub ok
Jan 31 08:12:21 compute-0 ceph-mon[75227]: 4.13 scrub starts
Jan 31 08:12:21 compute-0 ceph-mon[75227]: 4.13 scrub ok
Jan 31 08:12:21 compute-0 python3.9[109850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 31 08:12:21 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 31 08:12:22 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 31 08:12:22 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 31 08:12:22 compute-0 python3.9[110004]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:12:23 compute-0 ceph-mon[75227]: pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:23 compute-0 ceph-mon[75227]: 4.2 scrub starts
Jan 31 08:12:23 compute-0 ceph-mon[75227]: 4.2 scrub ok
Jan 31 08:12:23 compute-0 python3.9[110197]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:23 compute-0 sshd-session[109700]: Connection closed by 192.168.122.30 port 45470
Jan 31 08:12:23 compute-0 sshd-session[109697]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:12:23 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 08:12:23 compute-0 systemd[1]: session-37.scope: Consumed 1.977s CPU time.
Jan 31 08:12:23 compute-0 systemd-logind[793]: Session 37 logged out. Waiting for processes to exit.
Jan 31 08:12:23 compute-0 systemd-logind[793]: Removed session 37.
Jan 31 08:12:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 31 08:12:24 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 31 08:12:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 08:12:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 08:12:24 compute-0 ceph-mon[75227]: 11.17 scrub starts
Jan 31 08:12:24 compute-0 ceph-mon[75227]: 11.17 scrub ok
Jan 31 08:12:24 compute-0 ceph-mon[75227]: pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:24 compute-0 ceph-mon[75227]: 10.f scrub starts
Jan 31 08:12:24 compute-0 ceph-mon[75227]: 10.f scrub ok
Jan 31 08:12:24 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 31 08:12:24 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 31 08:12:25 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 31 08:12:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:25 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 31 08:12:25 compute-0 ceph-mon[75227]: 8.1a scrub starts
Jan 31 08:12:25 compute-0 ceph-mon[75227]: 8.1a scrub ok
Jan 31 08:12:25 compute-0 ceph-mon[75227]: 11.1e scrub starts
Jan 31 08:12:25 compute-0 ceph-mon[75227]: 11.1e scrub ok
Jan 31 08:12:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:26 compute-0 ceph-mon[75227]: 8.14 scrub starts
Jan 31 08:12:26 compute-0 ceph-mon[75227]: 8.14 scrub ok
Jan 31 08:12:26 compute-0 ceph-mon[75227]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:28 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 31 08:12:28 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 31 08:12:28 compute-0 ceph-mon[75227]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:29 compute-0 sshd-session[110224]: Accepted publickey for zuul from 192.168.122.30 port 38400 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:12:29 compute-0 systemd-logind[793]: New session 38 of user zuul.
Jan 31 08:12:29 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 31 08:12:29 compute-0 sshd-session[110224]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:12:29 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 31 08:12:29 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 31 08:12:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:29 compute-0 ceph-mon[75227]: 7.1b scrub starts
Jan 31 08:12:29 compute-0 ceph-mon[75227]: 7.1b scrub ok
Jan 31 08:12:29 compute-0 ceph-mon[75227]: 3.16 scrub starts
Jan 31 08:12:29 compute-0 ceph-mon[75227]: 3.16 scrub ok
Jan 31 08:12:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:30 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 08:12:30 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 08:12:30 compute-0 python3.9[110377]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:30 compute-0 ceph-mon[75227]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:30 compute-0 ceph-mon[75227]: 4.11 scrub starts
Jan 31 08:12:30 compute-0 ceph-mon[75227]: 4.11 scrub ok
Jan 31 08:12:31 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 31 08:12:31 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 31 08:12:31 compute-0 python3.9[110531]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:12:31
Jan 31 08:12:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:12:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:12:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'vms', 'backups']
Jan 31 08:12:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:12:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:32 compute-0 sudo[110685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuuixeoyfmepytevcmqtvelurzeelgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847151.9711416-35-171997837051999/AnsiballZ_setup.py'
Jan 31 08:12:32 compute-0 sudo[110685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:32 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 31 08:12:32 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 31 08:12:32 compute-0 python3.9[110687]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:12:32 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 31 08:12:32 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 31 08:12:32 compute-0 sudo[110685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:12:32 compute-0 ceph-mon[75227]: 3.12 scrub starts
Jan 31 08:12:32 compute-0 ceph-mon[75227]: 3.12 scrub ok
Jan 31 08:12:32 compute-0 ceph-mon[75227]: pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:32 compute-0 ceph-mon[75227]: 8.12 scrub starts
Jan 31 08:12:32 compute-0 ceph-mon[75227]: 8.12 scrub ok
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:12:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:12:33 compute-0 sudo[110769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxeluhurtgkxiecmzzkhsrftwnfujyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847151.9711416-35-171997837051999/AnsiballZ_dnf.py'
Jan 31 08:12:33 compute-0 sudo[110769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:33 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 31 08:12:33 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 31 08:12:33 compute-0 python3.9[110771]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:33 compute-0 ceph-mon[75227]: 8.18 scrub starts
Jan 31 08:12:33 compute-0 ceph-mon[75227]: 8.18 scrub ok
Jan 31 08:12:34 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 31 08:12:34 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 31 08:12:34 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 08:12:34 compute-0 sudo[110769]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:34 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 08:12:34 compute-0 sudo[110922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswtoqtkqyhwmdisptnzpchijpcnbhwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847154.6708107-47-106628229589198/AnsiballZ_setup.py'
Jan 31 08:12:34 compute-0 sudo[110922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:34 compute-0 ceph-mon[75227]: 10.d scrub starts
Jan 31 08:12:34 compute-0 ceph-mon[75227]: 10.d scrub ok
Jan 31 08:12:34 compute-0 ceph-mon[75227]: pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:34 compute-0 ceph-mon[75227]: 2.1b scrub starts
Jan 31 08:12:34 compute-0 ceph-mon[75227]: 2.1b scrub ok
Jan 31 08:12:34 compute-0 ceph-mon[75227]: 11.11 scrub starts
Jan 31 08:12:34 compute-0 ceph-mon[75227]: 11.11 scrub ok
Jan 31 08:12:35 compute-0 python3.9[110924]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:12:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:35 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 31 08:12:35 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 31 08:12:35 compute-0 sudo[110922]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:36 compute-0 sudo[111117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovlbczjamtcecobdjfcaqjpqxhiknvfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847155.5810614-58-214549851252932/AnsiballZ_file.py'
Jan 31 08:12:36 compute-0 sudo[111117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:36 compute-0 python3.9[111119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:36 compute-0 sudo[111117]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:36 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 31 08:12:36 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 31 08:12:36 compute-0 sudo[111269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxabhavpsuakkvczcdqwewdlcumbejkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847156.4006746-66-63882340604368/AnsiballZ_command.py'
Jan 31 08:12:36 compute-0 sudo[111269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:36 compute-0 ceph-mon[75227]: 10.e scrub starts
Jan 31 08:12:36 compute-0 ceph-mon[75227]: 10.e scrub ok
Jan 31 08:12:36 compute-0 ceph-mon[75227]: pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:36 compute-0 python3.9[111271]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:37 compute-0 sudo[111269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:37 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 31 08:12:37 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 31 08:12:37 compute-0 sudo[111434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchjbhxwzqkybykdssnetprsnnbywocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847157.1394596-74-124693333548169/AnsiballZ_stat.py'
Jan 31 08:12:37 compute-0 sudo[111434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:37 compute-0 python3.9[111436]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:37 compute-0 sudo[111434]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:37 compute-0 ceph-mon[75227]: 10.15 scrub starts
Jan 31 08:12:37 compute-0 ceph-mon[75227]: 10.15 scrub ok
Jan 31 08:12:37 compute-0 ceph-mon[75227]: 7.1c scrub starts
Jan 31 08:12:37 compute-0 ceph-mon[75227]: 7.1c scrub ok
Jan 31 08:12:37 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 08:12:37 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 08:12:37 compute-0 sudo[111512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okaumpewkacvxbiyqjjybehevoahihmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847157.1394596-74-124693333548169/AnsiballZ_file.py'
Jan 31 08:12:37 compute-0 sudo[111512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:38 compute-0 python3.9[111514]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:38 compute-0 sudo[111512]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:38 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 31 08:12:38 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 31 08:12:38 compute-0 sudo[111664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxybbnmjaanxshggkgiiwxjegpcaeyfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847158.2737687-86-131230509546051/AnsiballZ_stat.py'
Jan 31 08:12:38 compute-0 sudo[111664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:38 compute-0 python3.9[111666]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:38 compute-0 sudo[111664]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:38 compute-0 sudo[111742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqnlssrwzkfkvdsgoimaapbnwaggitvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847158.2737687-86-131230509546051/AnsiballZ_file.py'
Jan 31 08:12:38 compute-0 sudo[111742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 31 08:12:38 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 31 08:12:38 compute-0 ceph-mon[75227]: pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:38 compute-0 ceph-mon[75227]: 5.19 scrub starts
Jan 31 08:12:38 compute-0 ceph-mon[75227]: 5.19 scrub ok
Jan 31 08:12:38 compute-0 ceph-mon[75227]: 8.1c scrub starts
Jan 31 08:12:38 compute-0 ceph-mon[75227]: 8.1c scrub ok
Jan 31 08:12:39 compute-0 python3.9[111744]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:12:39 compute-0 sudo[111742]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:39 compute-0 sudo[111894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsnioexmafkxsxitwotyxgwhpvyulksg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847159.2497196-99-77308709311860/AnsiballZ_ini_file.py'
Jan 31 08:12:39 compute-0 sudo[111894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:39 compute-0 python3.9[111896]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:12:39 compute-0 sudo[111894]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:39 compute-0 ceph-mon[75227]: 5.18 scrub starts
Jan 31 08:12:39 compute-0 ceph-mon[75227]: 5.18 scrub ok
Jan 31 08:12:40 compute-0 sudo[112046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ektegrofaikssbvfgskfwpisoepkfyeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847159.9110696-99-195490382931783/AnsiballZ_ini_file.py'
Jan 31 08:12:40 compute-0 sudo[112046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:40 compute-0 python3.9[112048]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:12:40 compute-0 sudo[112046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:40 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 31 08:12:40 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 31 08:12:40 compute-0 sudo[112198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbalwuzmtkclapznpsmimndvgpimumt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847160.4281049-99-217227795061894/AnsiballZ_ini_file.py'
Jan 31 08:12:40 compute-0 sudo[112198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:40 compute-0 python3.9[112200]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:12:40 compute-0 sudo[112198]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 31 08:12:40 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 31 08:12:40 compute-0 ceph-mon[75227]: pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:41 compute-0 sudo[112350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymvlthgijsjacfwxnccyjvttejfnaraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847161.0069199-99-164135816596950/AnsiballZ_ini_file.py'
Jan 31 08:12:41 compute-0 sudo[112350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:41 compute-0 python3.9[112352]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:12:41 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 31 08:12:41 compute-0 sudo[112350]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:41 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 31 08:12:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:41 compute-0 sudo[112502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dluylzzuatmdqetnecqlbwayolippkeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847161.6078758-130-150292745365830/AnsiballZ_dnf.py'
Jan 31 08:12:41 compute-0 sudo[112502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 08:12:41 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 08:12:41 compute-0 ceph-mon[75227]: 10.9 scrub starts
Jan 31 08:12:41 compute-0 ceph-mon[75227]: 10.9 scrub ok
Jan 31 08:12:41 compute-0 ceph-mon[75227]: 4.d scrub starts
Jan 31 08:12:41 compute-0 ceph-mon[75227]: 4.d scrub ok
Jan 31 08:12:42 compute-0 python3.9[112504]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:42 compute-0 ceph-mon[75227]: 8.6 scrub starts
Jan 31 08:12:42 compute-0 ceph-mon[75227]: 8.6 scrub ok
Jan 31 08:12:42 compute-0 ceph-mon[75227]: pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:42 compute-0 ceph-mon[75227]: 4.f scrub starts
Jan 31 08:12:42 compute-0 ceph-mon[75227]: 4.f scrub ok
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:12:43 compute-0 sudo[112502]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:43 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 31 08:12:43 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 31 08:12:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 31 08:12:43 compute-0 sudo[112655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvgjpkfkxwiwmuqtwlsersqhlyyhkwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847163.7115526-141-149302307446074/AnsiballZ_setup.py'
Jan 31 08:12:43 compute-0 sudo[112655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:43 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 31 08:12:44 compute-0 ceph-mon[75227]: 6.8 scrub starts
Jan 31 08:12:44 compute-0 ceph-mon[75227]: 6.8 scrub ok
Jan 31 08:12:44 compute-0 python3.9[112657]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:12:44 compute-0 sudo[112655]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-0 sudo[112809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqztfxpkbwxgdnoafmelvpxegneplgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847164.39962-149-170777525132918/AnsiballZ_stat.py'
Jan 31 08:12:44 compute-0 sudo[112809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:44 compute-0 python3.9[112811]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:12:44 compute-0 sudo[112809]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 31 08:12:44 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 31 08:12:45 compute-0 ceph-mon[75227]: pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:45 compute-0 ceph-mon[75227]: 2.7 scrub starts
Jan 31 08:12:45 compute-0 ceph-mon[75227]: 2.7 scrub ok
Jan 31 08:12:45 compute-0 sudo[112961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emuvivnbfsjvsnvzfltkqluekdxwplrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847165.001316-158-118152097398923/AnsiballZ_stat.py'
Jan 31 08:12:45 compute-0 sudo[112961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:45 compute-0 python3.9[112963]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:12:45 compute-0 sudo[112961]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:45 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 08:12:45 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 08:12:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 31 08:12:45 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 31 08:12:45 compute-0 sudo[113113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbuytfflxnpcmujjuuhheitjxradfsuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847165.6529157-168-226494787217139/AnsiballZ_command.py'
Jan 31 08:12:45 compute-0 sudo[113113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:46 compute-0 python3.9[113115]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:12:46 compute-0 ceph-mon[75227]: 4.12 scrub starts
Jan 31 08:12:46 compute-0 ceph-mon[75227]: 4.12 scrub ok
Jan 31 08:12:46 compute-0 ceph-mon[75227]: 6.f scrub starts
Jan 31 08:12:46 compute-0 ceph-mon[75227]: 6.f scrub ok
Jan 31 08:12:46 compute-0 sudo[113113]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:46 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 31 08:12:46 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 31 08:12:46 compute-0 sudo[113266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqdwoyyyrqqumhsthfxaiuipvvxnzzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847166.3910112-178-270419393988464/AnsiballZ_service_facts.py'
Jan 31 08:12:46 compute-0 sudo[113266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:47 compute-0 python3.9[113268]: ansible-service_facts Invoked
Jan 31 08:12:47 compute-0 network[113285]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:12:47 compute-0 network[113286]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:12:47 compute-0 network[113287]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:12:47 compute-0 ceph-mon[75227]: pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:47 compute-0 ceph-mon[75227]: 4.14 scrub starts
Jan 31 08:12:47 compute-0 ceph-mon[75227]: 4.14 scrub ok
Jan 31 08:12:47 compute-0 ceph-mon[75227]: 9.8 scrub starts
Jan 31 08:12:47 compute-0 ceph-mon[75227]: 9.8 scrub ok
Jan 31 08:12:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 08:12:47 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 08:12:48 compute-0 ceph-mon[75227]: pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:48 compute-0 ceph-mon[75227]: 4.10 scrub starts
Jan 31 08:12:48 compute-0 ceph-mon[75227]: 4.10 scrub ok
Jan 31 08:12:48 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 31 08:12:48 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 31 08:12:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 31 08:12:48 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 31 08:12:49 compute-0 ceph-mon[75227]: 8.f scrub starts
Jan 31 08:12:49 compute-0 ceph-mon[75227]: 8.f scrub ok
Jan 31 08:12:49 compute-0 ceph-mon[75227]: 10.14 scrub starts
Jan 31 08:12:49 compute-0 ceph-mon[75227]: 10.14 scrub ok
Jan 31 08:12:49 compute-0 sudo[113266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:49 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 31 08:12:49 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 31 08:12:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:50 compute-0 sudo[113570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixzxbqxkfvhmisabsiejzjdtgkxudpqn ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1769847169.8048928-193-37820566007219/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1769847169.8048928-193-37820566007219/args'
Jan 31 08:12:50 compute-0 sudo[113570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:50 compute-0 sudo[113570]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:50 compute-0 ceph-mon[75227]: 10.12 scrub starts
Jan 31 08:12:50 compute-0 ceph-mon[75227]: 10.12 scrub ok
Jan 31 08:12:50 compute-0 ceph-mon[75227]: pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:50 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 31 08:12:50 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 31 08:12:50 compute-0 sudo[113737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoigfoveapkftkwoacdounmwzmhxsdfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847170.369048-204-118426809346161/AnsiballZ_dnf.py'
Jan 31 08:12:50 compute-0 sudo[113737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:51 compute-0 python3.9[113739]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:12:51 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 31 08:12:51 compute-0 ceph-mon[75227]: 9.17 scrub starts
Jan 31 08:12:51 compute-0 ceph-mon[75227]: 9.17 scrub ok
Jan 31 08:12:51 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 31 08:12:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:52 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 31 08:12:52 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 31 08:12:52 compute-0 ceph-mon[75227]: 9.e scrub starts
Jan 31 08:12:52 compute-0 ceph-mon[75227]: 9.e scrub ok
Jan 31 08:12:52 compute-0 ceph-mon[75227]: pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:52 compute-0 sudo[113737]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:52 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 31 08:12:52 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 31 08:12:53 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 31 08:12:53 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 31 08:12:53 compute-0 sudo[113890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ushrrvmkmlzzfubhqcidpukwungukxuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847172.9792159-217-75757686048932/AnsiballZ_package_facts.py'
Jan 31 08:12:53 compute-0 sudo[113890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:53 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 31 08:12:53 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 31 08:12:53 compute-0 python3.9[113892]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 08:12:53 compute-0 ceph-mon[75227]: 6.5 scrub starts
Jan 31 08:12:53 compute-0 ceph-mon[75227]: 6.5 scrub ok
Jan 31 08:12:53 compute-0 ceph-mon[75227]: 6.2 scrub starts
Jan 31 08:12:53 compute-0 ceph-mon[75227]: 6.2 scrub ok
Jan 31 08:12:54 compute-0 sudo[113890]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:54 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 31 08:12:54 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 31 08:12:54 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 31 08:12:54 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 31 08:12:55 compute-0 sudo[114042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lomgvkcyocxmiemdrbimheszlkiqkawk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847174.7435713-227-14600401080840/AnsiballZ_stat.py'
Jan 31 08:12:55 compute-0 sudo[114042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:55 compute-0 ceph-mon[75227]: 6.9 scrub starts
Jan 31 08:12:55 compute-0 ceph-mon[75227]: 6.9 scrub ok
Jan 31 08:12:55 compute-0 ceph-mon[75227]: pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:55 compute-0 ceph-mon[75227]: 6.6 scrub starts
Jan 31 08:12:55 compute-0 ceph-mon[75227]: 6.6 scrub ok
Jan 31 08:12:55 compute-0 ceph-mon[75227]: 9.f scrub starts
Jan 31 08:12:55 compute-0 ceph-mon[75227]: 9.f scrub ok
Jan 31 08:12:55 compute-0 python3.9[114044]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:55 compute-0 sudo[114042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:55 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 08:12:55 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 08:12:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:12:55 compute-0 sudo[114120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyoevpowzcyxkemqufomjzpdpmwhptye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847174.7435713-227-14600401080840/AnsiballZ_file.py'
Jan 31 08:12:55 compute-0 sudo[114120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:55 compute-0 python3.9[114122]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:55 compute-0 sudo[114120]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:56 compute-0 ceph-mon[75227]: 6.a scrub starts
Jan 31 08:12:56 compute-0 ceph-mon[75227]: 6.a scrub ok
Jan 31 08:12:56 compute-0 ceph-mon[75227]: 9.c scrub starts
Jan 31 08:12:56 compute-0 ceph-mon[75227]: 9.c scrub ok
Jan 31 08:12:56 compute-0 sudo[114272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtvxgwjsksiuowqsgubffeiocjkzttfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847176.0297863-239-139218819905276/AnsiballZ_stat.py'
Jan 31 08:12:56 compute-0 sudo[114272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:56 compute-0 python3.9[114274]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:12:56 compute-0 sudo[114272]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:56 compute-0 sudo[114350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwvitskxzuinaqnxtubpaebdizuhwwqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847176.0297863-239-139218819905276/AnsiballZ_file.py'
Jan 31 08:12:56 compute-0 sudo[114350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:56 compute-0 python3.9[114352]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:56 compute-0 sudo[114350]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:57 compute-0 ceph-mon[75227]: pgmap v310: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 31 08:12:57 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 31 08:12:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:58 compute-0 sudo[114502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrfwpgkpwsysbndmrakzbmydzrktvscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847177.576009-257-136391322529710/AnsiballZ_lineinfile.py'
Jan 31 08:12:58 compute-0 sudo[114502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:58 compute-0 python3.9[114504]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:12:58 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 31 08:12:58 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 31 08:12:58 compute-0 sudo[114502]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:58 compute-0 ceph-mon[75227]: 9.7 scrub starts
Jan 31 08:12:58 compute-0 ceph-mon[75227]: 9.7 scrub ok
Jan 31 08:12:58 compute-0 ceph-mon[75227]: pgmap v311: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:58 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 08:12:58 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 08:12:59 compute-0 sudo[114654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakqbtthnwbfztucqbvfyomuptxdtkky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847178.9601536-272-22602054129604/AnsiballZ_setup.py'
Jan 31 08:12:59 compute-0 sudo[114654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:12:59 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 31 08:12:59 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 31 08:12:59 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 31 08:12:59 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 31 08:12:59 compute-0 python3.9[114656]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:12:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:12:59 compute-0 sudo[114654]: pam_unix(sudo:session): session closed for user root
Jan 31 08:12:59 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 31 08:12:59 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 31 08:13:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:00 compute-0 ceph-mon[75227]: 6.7 scrub starts
Jan 31 08:13:00 compute-0 ceph-mon[75227]: 6.7 scrub ok
Jan 31 08:13:00 compute-0 ceph-mon[75227]: 6.4 scrub starts
Jan 31 08:13:00 compute-0 ceph-mon[75227]: 6.4 scrub ok
Jan 31 08:13:00 compute-0 ceph-mon[75227]: 9.6 scrub starts
Jan 31 08:13:00 compute-0 ceph-mon[75227]: 9.6 scrub ok
Jan 31 08:13:00 compute-0 sudo[114738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgxmvrqgsligfqnwiushfsurnximvcho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847178.9601536-272-22602054129604/AnsiballZ_systemd.py'
Jan 31 08:13:00 compute-0 sudo[114738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:00 compute-0 python3.9[114740]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:00 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 31 08:13:00 compute-0 sudo[114738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:00 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 31 08:13:01 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 31 08:13:01 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 31 08:13:01 compute-0 ceph-mon[75227]: 6.3 scrub starts
Jan 31 08:13:01 compute-0 ceph-mon[75227]: 6.3 scrub ok
Jan 31 08:13:01 compute-0 ceph-mon[75227]: pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:01 compute-0 ceph-mon[75227]: 6.d scrub starts
Jan 31 08:13:01 compute-0 ceph-mon[75227]: 6.d scrub ok
Jan 31 08:13:01 compute-0 ceph-mon[75227]: 6.e scrub starts
Jan 31 08:13:01 compute-0 ceph-mon[75227]: 6.e scrub ok
Jan 31 08:13:01 compute-0 sshd-session[110227]: Connection closed by 192.168.122.30 port 38400
Jan 31 08:13:01 compute-0 sshd-session[110224]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:13:01 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 08:13:01 compute-0 systemd[1]: session-38.scope: Consumed 20.759s CPU time.
Jan 31 08:13:01 compute-0 systemd-logind[793]: Session 38 logged out. Waiting for processes to exit.
Jan 31 08:13:01 compute-0 systemd-logind[793]: Removed session 38.
Jan 31 08:13:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:02 compute-0 ceph-mon[75227]: 6.0 scrub starts
Jan 31 08:13:02 compute-0 ceph-mon[75227]: 6.0 scrub ok
Jan 31 08:13:02 compute-0 ceph-mon[75227]: pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:02 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 31 08:13:02 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 31 08:13:03 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 31 08:13:03 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 31 08:13:03 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 08:13:03 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 08:13:03 compute-0 ceph-mon[75227]: 6.1 scrub starts
Jan 31 08:13:03 compute-0 ceph-mon[75227]: 6.1 scrub ok
Jan 31 08:13:03 compute-0 ceph-mon[75227]: 9.19 scrub starts
Jan 31 08:13:03 compute-0 ceph-mon[75227]: 9.19 scrub ok
Jan 31 08:13:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:04 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 31 08:13:04 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 31 08:13:04 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 31 08:13:04 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 31 08:13:04 compute-0 ceph-mon[75227]: 9.11 scrub starts
Jan 31 08:13:04 compute-0 ceph-mon[75227]: 9.11 scrub ok
Jan 31 08:13:04 compute-0 ceph-mon[75227]: pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:04 compute-0 ceph-mon[75227]: 9.18 scrub starts
Jan 31 08:13:04 compute-0 ceph-mon[75227]: 9.18 scrub ok
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.666167) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184666217, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7316, "num_deletes": 251, "total_data_size": 10034784, "memory_usage": 10193880, "flush_reason": "Manual Compaction"}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184703856, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 8012393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7459, "table_properties": {"data_size": 7984438, "index_size": 18496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 77694, "raw_average_key_size": 23, "raw_value_size": 7919647, "raw_average_value_size": 2378, "num_data_blocks": 812, "num_entries": 3330, "num_filter_entries": 3330, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846773, "oldest_key_time": 1769846773, "file_creation_time": 1769847184, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 37773 microseconds, and 9846 cpu microseconds.
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.703931) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 8012393 bytes OK
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.703961) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.705591) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.705618) EVENT_LOG_v1 {"time_micros": 1769847184705611, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.705661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10002761, prev total WAL file size 10002761, number of live WAL files 2.
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.707371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7824KB) 13(58KB) 8(1944B)]
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184707461, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 8074297, "oldest_snapshot_seqno": -1}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3156 keys, 8027119 bytes, temperature: kUnknown
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184751635, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 8027119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7999599, "index_size": 18514, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 76134, "raw_average_key_size": 24, "raw_value_size": 7936167, "raw_average_value_size": 2514, "num_data_blocks": 814, "num_entries": 3156, "num_filter_entries": 3156, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847184, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.751885) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 8027119 bytes
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.753056) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.4 rd, 181.4 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.7, 0.0 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3445, records dropped: 289 output_compression: NoCompression
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.753089) EVENT_LOG_v1 {"time_micros": 1769847184753074, "job": 4, "event": "compaction_finished", "compaction_time_micros": 44257, "compaction_time_cpu_micros": 14795, "output_level": 6, "num_output_files": 1, "total_output_size": 8027119, "num_input_records": 3445, "num_output_records": 3156, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184754675, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184754772, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847184754823, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 08:13:04 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:13:04.707223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:13:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 31 08:13:05 compute-0 ceph-osd[88096]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 31 08:13:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:05 compute-0 ceph-mon[75227]: 9.5 scrub starts
Jan 31 08:13:05 compute-0 ceph-mon[75227]: 9.5 scrub ok
Jan 31 08:13:05 compute-0 ceph-mon[75227]: 9.13 scrub starts
Jan 31 08:13:05 compute-0 ceph-mon[75227]: 9.13 scrub ok
Jan 31 08:13:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:05 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 08:13:05 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 08:13:06 compute-0 ceph-mon[75227]: pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:06 compute-0 ceph-mon[75227]: 6.c scrub starts
Jan 31 08:13:06 compute-0 ceph-mon[75227]: 6.c scrub ok
Jan 31 08:13:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 08:13:06 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 08:13:07 compute-0 ceph-mon[75227]: 6.b scrub starts
Jan 31 08:13:07 compute-0 ceph-mon[75227]: 6.b scrub ok
Jan 31 08:13:07 compute-0 sshd-session[114768]: Accepted publickey for zuul from 192.168.122.30 port 56614 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:13:07 compute-0 systemd-logind[793]: New session 39 of user zuul.
Jan 31 08:13:07 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 31 08:13:07 compute-0 sshd-session[114768]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:13:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 31 08:13:07 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 31 08:13:08 compute-0 sudo[114921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcdxdicmrubulbrumuugtkjkyemlpxta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847187.9180994-17-281223855831927/AnsiballZ_file.py'
Jan 31 08:13:08 compute-0 sudo[114921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:08 compute-0 python3.9[114923]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:08 compute-0 sudo[114921]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:08 compute-0 ceph-mon[75227]: pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:08 compute-0 ceph-mon[75227]: 9.15 scrub starts
Jan 31 08:13:08 compute-0 ceph-mon[75227]: 9.15 scrub ok
Jan 31 08:13:09 compute-0 sudo[115073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwgzbzaulfphudefwebwpccsvwawvbtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847188.6539433-29-275983497049262/AnsiballZ_stat.py'
Jan 31 08:13:09 compute-0 sudo[115073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:09 compute-0 python3.9[115075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:09 compute-0 sudo[115073]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:09 compute-0 sudo[115151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbepgzskunrhawobglpxaulyqaidfdbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847188.6539433-29-275983497049262/AnsiballZ_file.py'
Jan 31 08:13:09 compute-0 sudo[115151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:09 compute-0 sudo[115154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:09 compute-0 sudo[115154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:09 compute-0 sudo[115154]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:09 compute-0 python3.9[115153]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:09 compute-0 sudo[115179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:13:09 compute-0 sudo[115151]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:09 compute-0 sudo[115179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:10 compute-0 sshd-session[114771]: Connection closed by 192.168.122.30 port 56614
Jan 31 08:13:10 compute-0 sshd-session[114768]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:13:10 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 08:13:10 compute-0 systemd[1]: session-39.scope: Consumed 1.325s CPU time.
Jan 31 08:13:10 compute-0 systemd-logind[793]: Session 39 logged out. Waiting for processes to exit.
Jan 31 08:13:10 compute-0 systemd-logind[793]: Removed session 39.
Jan 31 08:13:10 compute-0 sudo[115179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:13:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:13:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:13:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:13:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:13:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:13:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:13:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:10 compute-0 sudo[115259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:10 compute-0 sudo[115259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:10 compute-0 sudo[115259]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:10 compute-0 sudo[115284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:13:10 compute-0 sudo[115284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.754198841 +0000 UTC m=+0.048445462 container create 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:10 compute-0 systemd[1]: Started libpod-conmon-0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0.scope.
Jan 31 08:13:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.830053422 +0000 UTC m=+0.124300093 container init 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.734782706 +0000 UTC m=+0.029029367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.836916885 +0000 UTC m=+0.131163496 container start 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.839709383 +0000 UTC m=+0.133956034 container attach 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:13:10 compute-0 goofy_chaum[115337]: 167 167
Jan 31 08:13:10 compute-0 systemd[1]: libpod-0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0.scope: Deactivated successfully.
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.842514502 +0000 UTC m=+0.136761113 container died 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83a98542081ae0c3fa5247d7483848fc73d192f7d6a109a0fb694b7c9c0aa88-merged.mount: Deactivated successfully.
Jan 31 08:13:10 compute-0 podman[115321]: 2026-01-31 08:13:10.889017548 +0000 UTC m=+0.183264179 container remove 0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:13:10 compute-0 systemd[1]: libpod-conmon-0925c57b5e5c1bbd7086bab9f262091798cb7b52c769b00a1ddd28e8ef3c96b0.scope: Deactivated successfully.
Jan 31 08:13:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 08:13:10 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 08:13:10 compute-0 podman[115361]: 2026-01-31 08:13:10.987781363 +0000 UTC m=+0.029075808 container create cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 31 08:13:11 compute-0 systemd[1]: Started libpod-conmon-cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8.scope.
Jan 31 08:13:11 compute-0 ceph-mon[75227]: pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:13:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:13:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:13:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:13:11 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:13:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:11 compute-0 podman[115361]: 2026-01-31 08:13:11.061702949 +0000 UTC m=+0.102997454 container init cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:11 compute-0 podman[115361]: 2026-01-31 08:13:11.067598815 +0000 UTC m=+0.108893300 container start cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:11 compute-0 podman[115361]: 2026-01-31 08:13:11.070514807 +0000 UTC m=+0.111809262 container attach cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:11 compute-0 podman[115361]: 2026-01-31 08:13:10.974979283 +0000 UTC m=+0.016273748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 31 08:13:11 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 31 08:13:11 compute-0 interesting_pare[115377]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:13:11 compute-0 interesting_pare[115377]: --> All data devices are unavailable
Jan 31 08:13:11 compute-0 systemd[1]: libpod-cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8.scope: Deactivated successfully.
Jan 31 08:13:11 compute-0 podman[115361]: 2026-01-31 08:13:11.50385324 +0000 UTC m=+0.545147745 container died cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a089a4774bef54c1dfdcc236d35636cbc6e8e77999b744526a778477d35f1ce8-merged.mount: Deactivated successfully.
Jan 31 08:13:11 compute-0 podman[115361]: 2026-01-31 08:13:11.554382189 +0000 UTC m=+0.595676674 container remove cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_pare, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:13:11 compute-0 systemd[1]: libpod-conmon-cc4ab24b8fe8ef1d2a80020748f74e02e837f5d88c5b7ff4e49ad24b399dc5c8.scope: Deactivated successfully.
Jan 31 08:13:11 compute-0 sudo[115284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:11 compute-0 sudo[115409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:11 compute-0 sudo[115409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:11 compute-0 sudo[115409]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:11 compute-0 sudo[115434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:13:11 compute-0 sudo[115434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:11 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 31 08:13:11 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 31 08:13:11 compute-0 podman[115471]: 2026-01-31 08:13:11.96206171 +0000 UTC m=+0.040318633 container create 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:11 compute-0 systemd[1]: Started libpod-conmon-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope.
Jan 31 08:13:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:12 compute-0 podman[115471]: 2026-01-31 08:13:12.033332443 +0000 UTC m=+0.111589386 container init 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:13:12 compute-0 podman[115471]: 2026-01-31 08:13:11.941887424 +0000 UTC m=+0.020144377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:12 compute-0 ceph-mon[75227]: 9.14 scrub starts
Jan 31 08:13:12 compute-0 ceph-mon[75227]: 9.14 scrub ok
Jan 31 08:13:12 compute-0 podman[115471]: 2026-01-31 08:13:12.038834107 +0000 UTC m=+0.117091040 container start 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:13:12 compute-0 podman[115471]: 2026-01-31 08:13:12.041745379 +0000 UTC m=+0.120002312 container attach 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:13:12 compute-0 awesome_proskuriakova[115487]: 167 167
Jan 31 08:13:12 compute-0 systemd[1]: libpod-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope: Deactivated successfully.
Jan 31 08:13:12 compute-0 conmon[115487]: conmon 935374ad7755ab8f3f64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope/container/memory.events
Jan 31 08:13:12 compute-0 podman[115471]: 2026-01-31 08:13:12.044125656 +0000 UTC m=+0.122382579 container died 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-587470afab33dd01dbc16c5351765eb3bf1d64b85f536d6470c5708a0ef6d7e9-merged.mount: Deactivated successfully.
Jan 31 08:13:12 compute-0 podman[115471]: 2026-01-31 08:13:12.072935445 +0000 UTC m=+0.151192368 container remove 935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:13:12 compute-0 systemd[1]: libpod-conmon-935374ad7755ab8f3f6410e4092277c944373393c55d396582a15f61afde33cc.scope: Deactivated successfully.
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.192347959 +0000 UTC m=+0.040968281 container create 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:12 compute-0 systemd[1]: Started libpod-conmon-7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a.scope.
Jan 31 08:13:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.255594656 +0000 UTC m=+0.104215028 container init 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.264076874 +0000 UTC m=+0.112697196 container start 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.169621881 +0000 UTC m=+0.018242233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.267276624 +0000 UTC m=+0.115896976 container attach 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:13:12 compute-0 zealous_wilson[115527]: {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:     "0": [
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:         {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "devices": [
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "/dev/loop3"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             ],
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_name": "ceph_lv0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_size": "21470642176",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "name": "ceph_lv0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "tags": {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.crush_device_class": "",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.encrypted": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.objectstore": "bluestore",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osd_id": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.type": "block",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.vdo": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.with_tpm": "0"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             },
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "type": "block",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "vg_name": "ceph_vg0"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:         }
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:     ],
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:     "1": [
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:         {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "devices": [
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "/dev/loop4"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             ],
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_name": "ceph_lv1",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_size": "21470642176",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "name": "ceph_lv1",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "tags": {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.crush_device_class": "",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.encrypted": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.objectstore": "bluestore",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osd_id": "1",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.type": "block",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.vdo": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.with_tpm": "0"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             },
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "type": "block",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "vg_name": "ceph_vg1"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:         }
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:     ],
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:     "2": [
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:         {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "devices": [
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "/dev/loop5"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             ],
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_name": "ceph_lv2",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_size": "21470642176",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "name": "ceph_lv2",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "tags": {
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.cluster_name": "ceph",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.crush_device_class": "",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.encrypted": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.objectstore": "bluestore",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osd_id": "2",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.type": "block",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.vdo": "0",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:                 "ceph.with_tpm": "0"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             },
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "type": "block",
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:             "vg_name": "ceph_vg2"
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:         }
Jan 31 08:13:12 compute-0 zealous_wilson[115527]:     ]
Jan 31 08:13:12 compute-0 zealous_wilson[115527]: }
Jan 31 08:13:12 compute-0 systemd[1]: libpod-7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a.scope: Deactivated successfully.
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.537911997 +0000 UTC m=+0.386532319 container died 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d48a80f93348ddb4674cb9868db3f012dd386065817fe16db4e1e8e209a1213-merged.mount: Deactivated successfully.
Jan 31 08:13:12 compute-0 podman[115510]: 2026-01-31 08:13:12.573858467 +0000 UTC m=+0.422478789 container remove 7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:13:12 compute-0 systemd[1]: libpod-conmon-7189d4508bea8dda641e6029c76db51a82456594655c6c538d8eec6b99f4d05a.scope: Deactivated successfully.
Jan 31 08:13:12 compute-0 sudo[115434]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:12 compute-0 sudo[115549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:13:12 compute-0 sudo[115549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:12 compute-0 sudo[115549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:12 compute-0 sudo[115574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:13:12 compute-0 sudo[115574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:13 compute-0 ceph-mon[75227]: 9.b scrub starts
Jan 31 08:13:13 compute-0 ceph-mon[75227]: 9.b scrub ok
Jan 31 08:13:13 compute-0 ceph-mon[75227]: pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:13 compute-0 ceph-mon[75227]: 9.10 scrub starts
Jan 31 08:13:13 compute-0 ceph-mon[75227]: 9.10 scrub ok
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.062797921 +0000 UTC m=+0.060321945 container create 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:13:13 compute-0 systemd[1]: Started libpod-conmon-0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb.scope.
Jan 31 08:13:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.037938123 +0000 UTC m=+0.035462157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.135236106 +0000 UTC m=+0.132760150 container init 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.144369403 +0000 UTC m=+0.141893447 container start 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.147702246 +0000 UTC m=+0.145226350 container attach 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:13:13 compute-0 happy_cray[115626]: 167 167
Jan 31 08:13:13 compute-0 systemd[1]: libpod-0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb.scope: Deactivated successfully.
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.149875047 +0000 UTC m=+0.147399051 container died 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-18db69c5f9bcfdc5fabcd1453d79a413435e795bfe8f8c8daa11ee99af4a014e-merged.mount: Deactivated successfully.
Jan 31 08:13:13 compute-0 podman[115610]: 2026-01-31 08:13:13.194393438 +0000 UTC m=+0.191917452 container remove 0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:13 compute-0 systemd[1]: libpod-conmon-0b1089539ebe5ac495d7ba03d8a10a33bb84a0a283c9787610087b0f685618fb.scope: Deactivated successfully.
Jan 31 08:13:13 compute-0 podman[115651]: 2026-01-31 08:13:13.396209427 +0000 UTC m=+0.092234512 container create 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:13:13 compute-0 podman[115651]: 2026-01-31 08:13:13.339803093 +0000 UTC m=+0.035828268 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:13:13 compute-0 systemd[1]: Started libpod-conmon-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope.
Jan 31 08:13:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:13:13 compute-0 podman[115651]: 2026-01-31 08:13:13.523484913 +0000 UTC m=+0.219510038 container init 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:13:13 compute-0 podman[115651]: 2026-01-31 08:13:13.531905749 +0000 UTC m=+0.227930864 container start 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:13:13 compute-0 podman[115651]: 2026-01-31 08:13:13.535674665 +0000 UTC m=+0.231699840 container attach 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:13:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:14 compute-0 lvm[115747]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:13:14 compute-0 lvm[115746]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:13:14 compute-0 lvm[115747]: VG ceph_vg1 finished
Jan 31 08:13:14 compute-0 lvm[115746]: VG ceph_vg0 finished
Jan 31 08:13:14 compute-0 lvm[115749]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:13:14 compute-0 lvm[115749]: VG ceph_vg2 finished
Jan 31 08:13:14 compute-0 festive_rhodes[115668]: {}
Jan 31 08:13:14 compute-0 systemd[1]: libpod-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope: Deactivated successfully.
Jan 31 08:13:14 compute-0 podman[115651]: 2026-01-31 08:13:14.318010162 +0000 UTC m=+1.014035257 container died 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:13:14 compute-0 systemd[1]: libpod-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope: Consumed 1.043s CPU time.
Jan 31 08:13:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7438b6906344e060ff3a1b25e8eca1a3e67d3dbe0df880258ff6e05fd546c750-merged.mount: Deactivated successfully.
Jan 31 08:13:14 compute-0 podman[115651]: 2026-01-31 08:13:14.368576182 +0000 UTC m=+1.064601287 container remove 3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_rhodes, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:13:14 compute-0 systemd[1]: libpod-conmon-3f5508782961aab75e4a8491b90c83d0e337cc6d3384f6e211bba4792773659c.scope: Deactivated successfully.
Jan 31 08:13:14 compute-0 sudo[115574]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:13:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:13:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:13:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:13:14 compute-0 sudo[115765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:13:14 compute-0 sudo[115765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:13:14 compute-0 sudo[115765]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:15 compute-0 ceph-mon[75227]: pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:13:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:13:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:15 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 31 08:13:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:15 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 31 08:13:16 compute-0 ceph-mon[75227]: 9.12 scrub starts
Jan 31 08:13:16 compute-0 ceph-mon[75227]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:16 compute-0 ceph-mon[75227]: 9.12 scrub ok
Jan 31 08:13:17 compute-0 sshd-session[115790]: Accepted publickey for zuul from 192.168.122.30 port 40460 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:13:17 compute-0 systemd-logind[793]: New session 40 of user zuul.
Jan 31 08:13:17 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 31 08:13:17 compute-0 sshd-session[115790]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:13:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:17 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 08:13:17 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 08:13:18 compute-0 python3.9[115943]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:13:18 compute-0 sudo[116097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwqsgvuabmzfivgggodyislkdnymgnmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847198.5811236-28-55377597235637/AnsiballZ_file.py'
Jan 31 08:13:18 compute-0 sudo[116097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:19 compute-0 ceph-mon[75227]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:19 compute-0 ceph-mon[75227]: 9.2 scrub starts
Jan 31 08:13:19 compute-0 ceph-mon[75227]: 9.2 scrub ok
Jan 31 08:13:19 compute-0 python3.9[116099]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:19 compute-0 sudo[116097]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:19 compute-0 sudo[116272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmudrlfjgrzyntdsjqihggeykqsarwnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847199.3042924-36-164219924626471/AnsiballZ_stat.py'
Jan 31 08:13:19 compute-0 sudo[116272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:19 compute-0 python3.9[116274]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:19 compute-0 sudo[116272]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:20 compute-0 sudo[116350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvsxkcypnfqfglbvtpnhlncplbttxwyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847199.3042924-36-164219924626471/AnsiballZ_file.py'
Jan 31 08:13:20 compute-0 sudo[116350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:20 compute-0 python3.9[116352]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.adm08e91 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:20 compute-0 sudo[116350]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 31 08:13:20 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 31 08:13:21 compute-0 sudo[116502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgyrqybsfnhdrxklwfobfvxwjsnrxymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847200.7667818-56-76932757625570/AnsiballZ_stat.py'
Jan 31 08:13:21 compute-0 sudo[116502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:21 compute-0 python3.9[116504]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:21 compute-0 sudo[116502]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:21 compute-0 ceph-mon[75227]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:21 compute-0 sudo[116580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwygdbdqsgzhswouovlvbdpbonvmpyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847200.7667818-56-76932757625570/AnsiballZ_file.py'
Jan 31 08:13:21 compute-0 sudo[116580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:21 compute-0 python3.9[116582]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ffomknku recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:21 compute-0 sudo[116580]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:22 compute-0 sudo[116732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkaqkgdoqkhuqpppgmitdisyhzlajube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847201.9008932-69-173552736482271/AnsiballZ_file.py'
Jan 31 08:13:22 compute-0 sudo[116732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:22 compute-0 python3.9[116734]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:13:22 compute-0 sudo[116732]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:22 compute-0 ceph-mon[75227]: 9.0 scrub starts
Jan 31 08:13:22 compute-0 ceph-mon[75227]: 9.0 scrub ok
Jan 31 08:13:22 compute-0 ceph-mon[75227]: pgmap v323: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:22 compute-0 sudo[116884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltkjoexsmxyhfehbtlbfbxeztilafwtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847202.5787978-77-74597009373397/AnsiballZ_stat.py'
Jan 31 08:13:22 compute-0 sudo[116884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:23 compute-0 python3.9[116886]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:23 compute-0 sudo[116884]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:23 compute-0 sudo[116962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xakumjfvzgrpkkfuqssznbumciwpqosy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847202.5787978-77-74597009373397/AnsiballZ_file.py'
Jan 31 08:13:23 compute-0 sudo[116962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:23 compute-0 python3.9[116964]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:13:23 compute-0 sudo[116962]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:23 compute-0 sudo[117114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yckuumpxkpzgymrzrpwezerzynityxgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847203.5716155-77-116828908134773/AnsiballZ_stat.py'
Jan 31 08:13:23 compute-0 sudo[117114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 31 08:13:23 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 31 08:13:24 compute-0 python3.9[117116]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:24 compute-0 sudo[117114]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-0 sudo[117192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbajnmxmuwnerwlarvidicqajjyuetuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847203.5716155-77-116828908134773/AnsiballZ_file.py'
Jan 31 08:13:24 compute-0 sudo[117192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 31 08:13:24 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 31 08:13:24 compute-0 python3.9[117194]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:13:24 compute-0 sudo[117192]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:24 compute-0 sudo[117344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojchynhnooecahkygraibscbqqcicdzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847204.633138-100-178863593488468/AnsiballZ_file.py'
Jan 31 08:13:24 compute-0 sudo[117344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:25 compute-0 python3.9[117346]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:25 compute-0 sudo[117344]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 ceph-mon[75227]: pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:25 compute-0 ceph-mon[75227]: 9.a scrub starts
Jan 31 08:13:25 compute-0 ceph-mon[75227]: 9.a scrub ok
Jan 31 08:13:25 compute-0 sudo[117496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nedjsgwjhlafvwyedblsnklxaigltuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847205.1958854-108-241766411327096/AnsiballZ_stat.py'
Jan 31 08:13:25 compute-0 sudo[117496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:25 compute-0 python3.9[117498]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:25 compute-0 sudo[117496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:25 compute-0 sudo[117574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewqeejjpmbnhsfhcgbeqbsdoqcnsbwpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847205.1958854-108-241766411327096/AnsiballZ_file.py'
Jan 31 08:13:25 compute-0 sudo[117574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:25 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 08:13:26 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 08:13:26 compute-0 python3.9[117576]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:26 compute-0 sudo[117574]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:26 compute-0 ceph-mon[75227]: 9.16 scrub starts
Jan 31 08:13:26 compute-0 ceph-mon[75227]: 9.16 scrub ok
Jan 31 08:13:26 compute-0 ceph-mon[75227]: pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:26 compute-0 sudo[117726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejokgjhneymywsogbhxvtrbhqgakvnet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847206.1548426-120-39933180159291/AnsiballZ_stat.py'
Jan 31 08:13:26 compute-0 sudo[117726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:26 compute-0 python3.9[117728]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:26 compute-0 sudo[117726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:26 compute-0 sudo[117804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvgleixprzgxtszjtbupppihjxujukoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847206.1548426-120-39933180159291/AnsiballZ_file.py'
Jan 31 08:13:26 compute-0 sudo[117804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:27 compute-0 python3.9[117806]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:27 compute-0 sudo[117804]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:27 compute-0 ceph-mon[75227]: 9.4 scrub starts
Jan 31 08:13:27 compute-0 ceph-mon[75227]: 9.4 scrub ok
Jan 31 08:13:27 compute-0 sudo[117956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zisloozeupvsythnwrzsonhqgetmzpag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847207.2252352-132-167823381355887/AnsiballZ_systemd.py'
Jan 31 08:13:27 compute-0 sudo[117956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:28 compute-0 python3.9[117958]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:28 compute-0 systemd[1]: Reloading.
Jan 31 08:13:28 compute-0 systemd-rc-local-generator[117985]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:28 compute-0 systemd-sysv-generator[117989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:28 compute-0 ceph-mon[75227]: pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:28 compute-0 sudo[117956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:28 compute-0 sudo[118145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqiridkfikdzrdbzmenqqqsyroljdmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847208.6441362-140-138440254302006/AnsiballZ_stat.py'
Jan 31 08:13:28 compute-0 sudo[118145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:29 compute-0 python3.9[118147]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:29 compute-0 sudo[118145]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:29 compute-0 sudo[118223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggbcrclhmjxwxagzxbzenbynlsbglncv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847208.6441362-140-138440254302006/AnsiballZ_file.py'
Jan 31 08:13:29 compute-0 sudo[118223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:29 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 31 08:13:29 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 31 08:13:29 compute-0 python3.9[118225]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:29 compute-0 sudo[118223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:29 compute-0 sudo[118375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkjruqivrpdgzqacaxlpwirubvbugtny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847209.5831642-152-206763785415255/AnsiballZ_stat.py'
Jan 31 08:13:29 compute-0 sudo[118375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:30 compute-0 python3.9[118377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:30 compute-0 sudo[118375]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:30 compute-0 sudo[118453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbleidknugbxqnrnurleyxhdfyblulav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847209.5831642-152-206763785415255/AnsiballZ_file.py'
Jan 31 08:13:30 compute-0 sudo[118453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:30 compute-0 python3.9[118455]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:30 compute-0 sudo[118453]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:30 compute-0 sudo[118605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlcdolwuehxhumtjhhhfaopflighgcra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847210.5225403-164-104902866578758/AnsiballZ_systemd.py'
Jan 31 08:13:30 compute-0 sudo[118605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:30 compute-0 ceph-mon[75227]: 9.9 scrub starts
Jan 31 08:13:30 compute-0 ceph-mon[75227]: 9.9 scrub ok
Jan 31 08:13:30 compute-0 ceph-mon[75227]: pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:31 compute-0 python3.9[118607]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:13:31 compute-0 systemd[1]: Reloading.
Jan 31 08:13:31 compute-0 systemd-rc-local-generator[118630]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:13:31 compute-0 systemd-sysv-generator[118634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:13:31 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 31 08:13:31 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:13:31 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:13:31 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:13:31 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:13:31 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 31 08:13:31 compute-0 sudo[118605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:13:31
Jan 31 08:13:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:13:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:13:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'backups', '.rgw.root', 'default.rgw.log']
Jan 31 08:13:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:13:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 08:13:32 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 08:13:32 compute-0 python3.9[118798]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:13:32 compute-0 network[118815]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:13:32 compute-0 network[118816]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:13:32 compute-0 network[118817]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:13:32 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 31 08:13:32 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:13:32 compute-0 ceph-mon[75227]: 9.d scrub starts
Jan 31 08:13:32 compute-0 ceph-mon[75227]: 9.d scrub ok
Jan 31 08:13:32 compute-0 ceph-mon[75227]: pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:32 compute-0 ceph-mon[75227]: 9.1a scrub starts
Jan 31 08:13:32 compute-0 ceph-mon[75227]: 9.1a scrub ok
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:13:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:13:33 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 31 08:13:33 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 31 08:13:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:33 compute-0 ceph-mon[75227]: 9.1 scrub starts
Jan 31 08:13:33 compute-0 ceph-mon[75227]: 9.1 scrub ok
Jan 31 08:13:34 compute-0 ceph-mon[75227]: 9.3 scrub starts
Jan 31 08:13:34 compute-0 ceph-mon[75227]: 9.3 scrub ok
Jan 31 08:13:34 compute-0 ceph-mon[75227]: pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:35 compute-0 sudo[119077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhluohdkypzybimyojwkmnnjducoqlzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847214.8356314-190-37471707388740/AnsiballZ_stat.py'
Jan 31 08:13:35 compute-0 sudo[119077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:35 compute-0 python3.9[119079]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:35 compute-0 sudo[119077]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 sudo[119155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsjavsihvqmxmcvidqcukzacfccggyss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847214.8356314-190-37471707388740/AnsiballZ_file.py'
Jan 31 08:13:35 compute-0 sudo[119155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:35 compute-0 python3.9[119157]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:35 compute-0 sudo[119155]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:35 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 08:13:35 compute-0 ceph-osd[87035]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 08:13:36 compute-0 sudo[119307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cizwgjdljoyztpyzronjwezszvkoltmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847215.8638873-203-152923499873248/AnsiballZ_file.py'
Jan 31 08:13:36 compute-0 sudo[119307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:36 compute-0 python3.9[119309]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:36 compute-0 sudo[119307]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:36 compute-0 sudo[119459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyuyzkiryeopnsjlzogdprjeovpcwrxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847216.5042462-211-7297958915020/AnsiballZ_stat.py'
Jan 31 08:13:36 compute-0 sudo[119459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:36 compute-0 ceph-mon[75227]: pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:36 compute-0 ceph-mon[75227]: 9.1f scrub starts
Jan 31 08:13:36 compute-0 ceph-mon[75227]: 9.1f scrub ok
Jan 31 08:13:36 compute-0 python3.9[119461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:37 compute-0 sudo[119459]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:37 compute-0 sudo[119537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kipoearomscdlajtcszzsvtsjfqulppn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847216.5042462-211-7297958915020/AnsiballZ_file.py'
Jan 31 08:13:37 compute-0 sudo[119537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:37 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 31 08:13:37 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 31 08:13:37 compute-0 python3.9[119539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:37 compute-0 sudo[119537]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:37 compute-0 sudo[119689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vimuksfigofbpbdybqjvstcndpkapgrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847217.6215458-226-185335662064862/AnsiballZ_timezone.py'
Jan 31 08:13:37 compute-0 sudo[119689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:38 compute-0 python3.9[119691]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 08:13:38 compute-0 systemd[1]: Starting Time & Date Service...
Jan 31 08:13:38 compute-0 systemd[1]: Started Time & Date Service.
Jan 31 08:13:38 compute-0 sudo[119689]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:38 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 08:13:38 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 08:13:38 compute-0 sudo[119845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jimiriwzhbgdswsxuhazscnzqurebgga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847218.467177-235-79826902327797/AnsiballZ_file.py'
Jan 31 08:13:38 compute-0 sudo[119845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:38 compute-0 python3.9[119847]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:38 compute-0 sudo[119845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:38 compute-0 ceph-mon[75227]: 9.1d scrub starts
Jan 31 08:13:38 compute-0 ceph-mon[75227]: 9.1d scrub ok
Jan 31 08:13:38 compute-0 ceph-mon[75227]: pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:39 compute-0 sudo[119997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vugtcqqeljlhsdkecjahnyzpltmttqkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847219.0297017-243-75771902099794/AnsiballZ_stat.py'
Jan 31 08:13:39 compute-0 sudo[119997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:39 compute-0 python3.9[119999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:39 compute-0 sudo[119997]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:39 compute-0 sudo[120075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owewhktkjhzxsfzfmxvxkpvzhvpgkbgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847219.0297017-243-75771902099794/AnsiballZ_file.py'
Jan 31 08:13:39 compute-0 sudo[120075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:40 compute-0 python3.9[120077]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:40 compute-0 sudo[120075]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:40 compute-0 ceph-mon[75227]: 9.1c scrub starts
Jan 31 08:13:40 compute-0 ceph-mon[75227]: 9.1c scrub ok
Jan 31 08:13:40 compute-0 sudo[120227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkbbqoytbtoktlzuuocozlxgyujhshyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847220.2229924-255-189027443199371/AnsiballZ_stat.py'
Jan 31 08:13:40 compute-0 sudo[120227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:40 compute-0 python3.9[120229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:40 compute-0 sudo[120227]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:40 compute-0 sudo[120305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pazzcjsosyownmprqhzqehcwrhpkvrqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847220.2229924-255-189027443199371/AnsiballZ_file.py'
Jan 31 08:13:40 compute-0 sudo[120305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:40 compute-0 python3.9[120307]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wqkwzi68 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:41 compute-0 sudo[120305]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:41 compute-0 ceph-mon[75227]: pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:41 compute-0 sudo[120457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhewljnukhtfdmgaxqgqpjgvnaanfuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847221.1304479-267-236893963116957/AnsiballZ_stat.py'
Jan 31 08:13:41 compute-0 sudo[120457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:41 compute-0 python3.9[120459]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:41 compute-0 sudo[120457]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:41 compute-0 sudo[120535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcginempyuqknfjhkysewjvkivkmclis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847221.1304479-267-236893963116957/AnsiballZ_file.py'
Jan 31 08:13:41 compute-0 sudo[120535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:41 compute-0 python3.9[120537]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:41 compute-0 sudo[120535]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:42 compute-0 sudo[120687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctnuvfrrmvpulklqubcsadtcpcxrbzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847222.177591-280-242302119511908/AnsiballZ_command.py'
Jan 31 08:13:42 compute-0 sudo[120687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:42 compute-0 python3.9[120689]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:42 compute-0 sudo[120687]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:13:43 compute-0 ceph-mon[75227]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:43 compute-0 sudo[120840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wygqhhysivnevmzhrsyxmwjgvdkecoev ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847222.8617368-288-21077403860684/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:13:43 compute-0 sudo[120840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 08:13:43 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 08:13:43 compute-0 python3[120842]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:13:43 compute-0 sudo[120840]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:43 compute-0 sudo[120992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfhxboyjrlyxrytleentktkxtykqhvhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847223.5842893-296-176324831246535/AnsiballZ_stat.py'
Jan 31 08:13:43 compute-0 sudo[120992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:43 compute-0 python3.9[120994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:43 compute-0 sudo[120992]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:44 compute-0 sudo[121070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvklrneuyluurchruirptcvklvcsbjuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847223.5842893-296-176324831246535/AnsiballZ_file.py'
Jan 31 08:13:44 compute-0 sudo[121070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:44 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 08:13:44 compute-0 ceph-osd[85971]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 08:13:44 compute-0 python3.9[121072]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:44 compute-0 sudo[121070]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:44 compute-0 sudo[121222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axrannuvojoyenujpiiqhllchhgwadkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847224.5311546-308-278075843057044/AnsiballZ_stat.py'
Jan 31 08:13:44 compute-0 sudo[121222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:44 compute-0 python3.9[121224]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:44 compute-0 sudo[121222]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 ceph-mon[75227]: 9.1b scrub starts
Jan 31 08:13:45 compute-0 ceph-mon[75227]: 9.1b scrub ok
Jan 31 08:13:45 compute-0 ceph-mon[75227]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:45 compute-0 sudo[121347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otuyrvvilgkjwagvluwjuutctrfwqbqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847224.5311546-308-278075843057044/AnsiballZ_copy.py'
Jan 31 08:13:45 compute-0 sudo[121347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:45 compute-0 python3.9[121349]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847224.5311546-308-278075843057044/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:45 compute-0 sudo[121347]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:45 compute-0 sudo[121499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsnrlimloyutwtozyjsoybfbrmiflhvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847225.761493-323-278935029283173/AnsiballZ_stat.py'
Jan 31 08:13:46 compute-0 sudo[121499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:46 compute-0 python3.9[121501]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:46 compute-0 sudo[121499]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:46 compute-0 ceph-mon[75227]: 9.1e scrub starts
Jan 31 08:13:46 compute-0 ceph-mon[75227]: 9.1e scrub ok
Jan 31 08:13:46 compute-0 sudo[121577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpmezqbsnfuuzszgvhcpajzrltiqgrhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847225.761493-323-278935029283173/AnsiballZ_file.py'
Jan 31 08:13:46 compute-0 sudo[121577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:46 compute-0 python3.9[121579]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:46 compute-0 sudo[121577]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:47 compute-0 sudo[121729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cctuajzzbflouqzowvnnrxrgdevyfzdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847226.7970204-335-109088231033839/AnsiballZ_stat.py'
Jan 31 08:13:47 compute-0 sudo[121729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:47 compute-0 python3.9[121731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:47 compute-0 sudo[121729]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:47 compute-0 ceph-mon[75227]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:47 compute-0 sudo[121807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nboljvjwdehijsdwlikifvsrhwbtpvap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847226.7970204-335-109088231033839/AnsiballZ_file.py'
Jan 31 08:13:47 compute-0 sudo[121807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:47 compute-0 python3.9[121809]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:47 compute-0 sudo[121807]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:48 compute-0 sudo[121959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bliljuvdtdcrivrqgrxfzquaixwojzzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847227.8175247-347-257839881924836/AnsiballZ_stat.py'
Jan 31 08:13:48 compute-0 sudo[121959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:48 compute-0 python3.9[121961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:13:48 compute-0 sudo[121959]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:48 compute-0 ceph-mon[75227]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:48 compute-0 sudo[122037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mihdnvddjaoqxekjpzdoezsmszfuysnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847227.8175247-347-257839881924836/AnsiballZ_file.py'
Jan 31 08:13:48 compute-0 sudo[122037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:48 compute-0 python3.9[122039]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:48 compute-0 sudo[122037]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:49 compute-0 sudo[122189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujkhvtzxiopoxzgxnqehhldbivvidvfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847228.926742-360-186641466347508/AnsiballZ_command.py'
Jan 31 08:13:49 compute-0 sudo[122189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:49 compute-0 python3.9[122191]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:13:49 compute-0 sudo[122189]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:49 compute-0 sudo[122344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssfofqdvxakicaysmvbhitpyupopcoka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847229.5384092-368-61154637926794/AnsiballZ_blockinfile.py'
Jan 31 08:13:50 compute-0 sudo[122344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:50 compute-0 python3.9[122346]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:50 compute-0 sudo[122344]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:50 compute-0 sudo[122496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adwprjgzvkfzmjzhaouubmylqfbczqqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847230.358179-377-167342697305342/AnsiballZ_file.py'
Jan 31 08:13:50 compute-0 sudo[122496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:50 compute-0 python3.9[122498]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:50 compute-0 sudo[122496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:50 compute-0 ceph-mon[75227]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:51 compute-0 sudo[122648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qayqjqavvmgnnehjtjvpyjawefyzxwxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847230.9142098-377-105738907354280/AnsiballZ_file.py'
Jan 31 08:13:51 compute-0 sudo[122648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:51 compute-0 python3.9[122650]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:13:51 compute-0 sudo[122648]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:51 compute-0 sudo[122800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhpiopevcaokkkarzmpmkrpxvqhdyowg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847231.4781756-392-203898965698011/AnsiballZ_mount.py'
Jan 31 08:13:51 compute-0 sudo[122800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:52 compute-0 python3.9[122802]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 08:13:52 compute-0 sudo[122800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:52 compute-0 sudo[122952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckupebshdlfgxypchdzdcwueauzwylu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847232.31484-392-66098320141771/AnsiballZ_mount.py'
Jan 31 08:13:52 compute-0 sudo[122952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:52 compute-0 python3.9[122954]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 08:13:52 compute-0 sudo[122952]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:52 compute-0 ceph-mon[75227]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:53 compute-0 sshd-session[115793]: Connection closed by 192.168.122.30 port 40460
Jan 31 08:13:53 compute-0 sshd-session[115790]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:13:53 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 08:13:53 compute-0 systemd[1]: session-40.scope: Consumed 25.095s CPU time.
Jan 31 08:13:53 compute-0 systemd-logind[793]: Session 40 logged out. Waiting for processes to exit.
Jan 31 08:13:53 compute-0 systemd-logind[793]: Removed session 40.
Jan 31 08:13:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:54 compute-0 ceph-mon[75227]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:13:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:56 compute-0 ceph-mon[75227]: pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:58 compute-0 sshd-session[122979]: Accepted publickey for zuul from 192.168.122.30 port 59350 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:13:58 compute-0 systemd-logind[793]: New session 41 of user zuul.
Jan 31 08:13:58 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 31 08:13:58 compute-0 sshd-session[122979]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:13:58 compute-0 sudo[123132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrjeesnnhxyqfrassmscguqyimqwtbii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847238.2537935-16-181973379920950/AnsiballZ_tempfile.py'
Jan 31 08:13:58 compute-0 sudo[123132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:58 compute-0 python3.9[123134]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 08:13:58 compute-0 sudo[123132]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:58 compute-0 ceph-mon[75227]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:13:59 compute-0 sudo[123284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxuogzkrufyxgukfqpbaicefmsywijig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847239.014741-28-31653748857932/AnsiballZ_stat.py'
Jan 31 08:13:59 compute-0 sudo[123284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:13:59 compute-0 python3.9[123286]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:13:59 compute-0 sudo[123284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:13:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:00 compute-0 sudo[123438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhhneujunlznpknxavagyjjgzvhtpjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847239.7299507-36-269283812535685/AnsiballZ_slurp.py'
Jan 31 08:14:00 compute-0 sudo[123438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:00 compute-0 python3.9[123440]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 08:14:00 compute-0 sudo[123438]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:00 compute-0 sudo[123590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzhwiwtpouosilrowrotagieezgiwuvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847240.4163513-44-166055283139032/AnsiballZ_stat.py'
Jan 31 08:14:00 compute-0 sudo[123590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:00 compute-0 python3.9[123592]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2jw423ga follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:00 compute-0 sudo[123590]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:00 compute-0 ceph-mon[75227]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:01 compute-0 sudo[123715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msjipdckrpywekzxvsprtnfdippgkloc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847240.4163513-44-166055283139032/AnsiballZ_copy.py'
Jan 31 08:14:01 compute-0 sudo[123715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:01 compute-0 python3.9[123717]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2jw423ga mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847240.4163513-44-166055283139032/.source.2jw423ga _original_basename=._b2qh4io follow=False checksum=085088cdd6eb94656409168e9e8a2a7ec564f206 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:01 compute-0 sudo[123715]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:02 compute-0 sudo[123867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfetvqedrgsdgcknpytksxunagpqhbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847241.6656036-59-27032836932316/AnsiballZ_setup.py'
Jan 31 08:14:02 compute-0 sudo[123867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:02 compute-0 python3.9[123869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:14:02 compute-0 sudo[123867]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:02 compute-0 ceph-mon[75227]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:03 compute-0 sudo[124019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhjntvwbdeokscdwcpmabllxomozwitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847242.7026384-68-43746896148742/AnsiballZ_blockinfile.py'
Jan 31 08:14:03 compute-0 sudo[124019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:03 compute-0 python3.9[124021]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDWE2JVgZg7/u8eKJOhyXjs2p2Qt39hyygdPIhluejh1YW6dcdEylP4WBj6s+q3E0jylhkLknf3rSZ3V/k+1w4fdSUak8G4nLiV+h7jI0m37zoSEXpQABHGJkpgi2eMs0YNEF9ZbgIO31d28SspBpNxFqovrMK9sOzJD3jRaR2TV2FGV4csI4Je0LNdEV2NmeRljWtF7PlqQKs424iGvqmWC0B3yHCfBTNvXWNKzGR1N9odg9DQrU9iQl+1eRKkj6BTvJgzpUrsqny5n8vohkDGBUxN/PXOEp7pqhuJUPSphsqmLwQwrLfwDu7A7dJJfZkVKkpzZyD6doTBm0NvOOS1P7M8/iclLU1KEYLp51WWXc+cX67skjn1vfDJa7CGV5YlXA3q5QP5xqR6eDbptMG7KpRBt6sSG7A44KIXdmzbWGFuBJYi0sjVIDfXPkfJOcwxwUzMotpbCYCDOV94CS6XESh8ZKogwpuB8qVCTqZEJz/qxAkpdL1xxLZ6iM3SA2k=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBdV4ImCUSap74vh7n2NTRmfyoKbp4X6QTOOZaAU/4X4
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNKN9rH1fl1KXYyt+swOzNYmow6bIvU77b90jfMS4wXtyUATZdas4vlUZ46SayVV+s+nKQQloJFhgnR/5ots9Yc=
                                              create=True mode=0644 path=/tmp/ansible.2jw423ga state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:03 compute-0 sudo[124019]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:03 compute-0 sudo[124171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sudtptxjoxybwnufwzgwuzuuyhffquro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847243.3772523-76-133488150817075/AnsiballZ_command.py'
Jan 31 08:14:03 compute-0 sudo[124171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:03 compute-0 python3.9[124173]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2jw423ga' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:03 compute-0 sudo[124171]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:04 compute-0 sudo[124325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwtxeacngribximzpxofqzmaywtxateh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847244.0946777-84-266375537764515/AnsiballZ_file.py'
Jan 31 08:14:04 compute-0 sudo[124325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:04 compute-0 python3.9[124327]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2jw423ga state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:04 compute-0 sudo[124325]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:04 compute-0 ceph-mon[75227]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:04 compute-0 sshd-session[122982]: Connection closed by 192.168.122.30 port 59350
Jan 31 08:14:04 compute-0 sshd-session[122979]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:14:04 compute-0 systemd-logind[793]: Session 41 logged out. Waiting for processes to exit.
Jan 31 08:14:04 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 08:14:04 compute-0 systemd[1]: session-41.scope: Consumed 4.257s CPU time.
Jan 31 08:14:04 compute-0 systemd-logind[793]: Removed session 41.
Jan 31 08:14:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:06 compute-0 ceph-mon[75227]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:08 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 08:14:08 compute-0 ceph-mon[75227]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:10 compute-0 sshd-session[124355]: Accepted publickey for zuul from 192.168.122.30 port 34874 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:14:10 compute-0 systemd-logind[793]: New session 42 of user zuul.
Jan 31 08:14:10 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 31 08:14:10 compute-0 sshd-session[124355]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:14:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:10 compute-0 ceph-mon[75227]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:11 compute-0 python3.9[124508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:14:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:12 compute-0 sudo[124662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pflzftjkfdtmatbcgvglhsnslrrfobxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847251.5300906-27-185974418548249/AnsiballZ_systemd.py'
Jan 31 08:14:12 compute-0 sudo[124662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:12 compute-0 python3.9[124664]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 08:14:12 compute-0 sudo[124662]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:12 compute-0 sudo[124816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkksijtclynixvqefhcwydzeffqqhiqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847252.5065162-35-6636467963581/AnsiballZ_systemd.py'
Jan 31 08:14:12 compute-0 sudo[124816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:13 compute-0 ceph-mon[75227]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:13 compute-0 python3.9[124818]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:14:13 compute-0 sudo[124816]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:13 compute-0 sudo[124969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkrczujutjphgncvarolfcbgcvinzsas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847253.2695386-44-243844323117130/AnsiballZ_command.py'
Jan 31 08:14:13 compute-0 sudo[124969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:13 compute-0 python3.9[124971]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:13 compute-0 sudo[124969]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:14 compute-0 sudo[125122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrqbfwucioriktzwtxgmdlvdjiubmqru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847254.015356-52-98853893117794/AnsiballZ_stat.py'
Jan 31 08:14:14 compute-0 sudo[125122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:14 compute-0 sudo[125125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:14 compute-0 sudo[125125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:14 compute-0 sudo[125125]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:14 compute-0 python3.9[125124]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:14:14 compute-0 sudo[125122]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:14 compute-0 sudo[125150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:14:14 compute-0 sudo[125150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:15 compute-0 sudo[125150]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:14:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:15 compute-0 sudo[125329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:15 compute-0 sudo[125329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:15 compute-0 sudo[125329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:15 compute-0 sudo[125379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrhbehyfjlupusaxchmonkkhruoipowa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847254.745352-61-263135738176827/AnsiballZ_file.py'
Jan 31 08:14:15 compute-0 sudo[125379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:15 compute-0 sudo[125382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:14:15 compute-0 sudo[125382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:15 compute-0 python3.9[125386]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:15 compute-0 sudo[125379]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.39841622 +0000 UTC m=+0.036251667 container create 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:14:15 compute-0 systemd[1]: Started libpod-conmon-325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd.scope.
Jan 31 08:14:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.473770868 +0000 UTC m=+0.111606335 container init 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.479899392 +0000 UTC m=+0.117734869 container start 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.383139063 +0000 UTC m=+0.020974550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.483617621 +0000 UTC m=+0.121453058 container attach 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:14:15 compute-0 sweet_payne[125460]: 167 167
Jan 31 08:14:15 compute-0 systemd[1]: libpod-325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd.scope: Deactivated successfully.
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.487936836 +0000 UTC m=+0.125772303 container died 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b17f9cbbb82498831d92474017ee0d6c6981d4c69d74ed98b09258bbf1c39a23-merged.mount: Deactivated successfully.
Jan 31 08:14:15 compute-0 podman[125444]: 2026-01-31 08:14:15.523878284 +0000 UTC m=+0.161713721 container remove 325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_payne, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:14:15 compute-0 systemd[1]: libpod-conmon-325d1ace4ff16450dbac54dd76b2f83b94e8975af9cf05064f7c79cfc3dbb2dd.scope: Deactivated successfully.
Jan 31 08:14:15 compute-0 sshd-session[124358]: Connection closed by 192.168.122.30 port 34874
Jan 31 08:14:15 compute-0 sshd-session[124355]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:14:15 compute-0 systemd-logind[793]: Session 42 logged out. Waiting for processes to exit.
Jan 31 08:14:15 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 08:14:15 compute-0 systemd[1]: session-42.scope: Consumed 3.347s CPU time.
Jan 31 08:14:15 compute-0 systemd-logind[793]: Removed session 42.
Jan 31 08:14:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:15 compute-0 podman[125484]: 2026-01-31 08:14:15.647404036 +0000 UTC m=+0.041560919 container create 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:14:15 compute-0 systemd[1]: Started libpod-conmon-83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd.scope.
Jan 31 08:14:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:15 compute-0 podman[125484]: 2026-01-31 08:14:15.626348945 +0000 UTC m=+0.020505838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:15 compute-0 podman[125484]: 2026-01-31 08:14:15.738561995 +0000 UTC m=+0.132718898 container init 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:14:15 compute-0 podman[125484]: 2026-01-31 08:14:15.750231846 +0000 UTC m=+0.144388709 container start 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:14:15 compute-0 podman[125484]: 2026-01-31 08:14:15.753807862 +0000 UTC m=+0.147964765 container attach 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:14:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:14:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:14:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:14:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:14:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:14:16 compute-0 busy_mccarthy[125501]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:14:16 compute-0 busy_mccarthy[125501]: --> All data devices are unavailable
Jan 31 08:14:16 compute-0 systemd[1]: libpod-83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd.scope: Deactivated successfully.
Jan 31 08:14:16 compute-0 podman[125484]: 2026-01-31 08:14:16.157299555 +0000 UTC m=+0.551456458 container died 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a04b00bc0d9278e428c0f1f35d667762574ee50b01445adaa9fb98e2daae966d-merged.mount: Deactivated successfully.
Jan 31 08:14:16 compute-0 podman[125484]: 2026-01-31 08:14:16.189958665 +0000 UTC m=+0.584115568 container remove 83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:16 compute-0 systemd[1]: libpod-conmon-83d4acc1bf21b234627a5e298b7e515ecf6926beca908500ad38fb71b37d01dd.scope: Deactivated successfully.
Jan 31 08:14:16 compute-0 sudo[125382]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:16 compute-0 sudo[125531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:16 compute-0 sudo[125531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:16 compute-0 sudo[125531]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:16 compute-0 sudo[125556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:14:16 compute-0 sudo[125556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.635229042 +0000 UTC m=+0.043625504 container create 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:14:16 compute-0 systemd[1]: Started libpod-conmon-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope.
Jan 31 08:14:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.701857887 +0000 UTC m=+0.110254369 container init 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.706701286 +0000 UTC m=+0.115097738 container start 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:16 compute-0 naughty_herschel[125610]: 167 167
Jan 31 08:14:16 compute-0 systemd[1]: libpod-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope: Deactivated successfully.
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.709925962 +0000 UTC m=+0.118322444 container attach 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.615178537 +0000 UTC m=+0.023575049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:16 compute-0 conmon[125610]: conmon 81b385370995ab239c14 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope/container/memory.events
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.710861897 +0000 UTC m=+0.119258389 container died 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe868524c4dba6a4d346e7d62674edd468cb9506f80bbcac96039bc6f5757263-merged.mount: Deactivated successfully.
Jan 31 08:14:16 compute-0 podman[125594]: 2026-01-31 08:14:16.742194842 +0000 UTC m=+0.150591294 container remove 81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:14:16 compute-0 systemd[1]: libpod-conmon-81b385370995ab239c14496883e501e683c97db6470f01ad7c59c9af8ef03262.scope: Deactivated successfully.
Jan 31 08:14:16 compute-0 podman[125634]: 2026-01-31 08:14:16.855692247 +0000 UTC m=+0.034911201 container create b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:16 compute-0 systemd[1]: Started libpod-conmon-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope.
Jan 31 08:14:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:16 compute-0 podman[125634]: 2026-01-31 08:14:16.919860537 +0000 UTC m=+0.099079491 container init b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:14:16 compute-0 podman[125634]: 2026-01-31 08:14:16.924375278 +0000 UTC m=+0.103594192 container start b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:14:16 compute-0 podman[125634]: 2026-01-31 08:14:16.927119511 +0000 UTC m=+0.106338465 container attach b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:14:16 compute-0 podman[125634]: 2026-01-31 08:14:16.839546387 +0000 UTC m=+0.018765331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:17 compute-0 ceph-mon[75227]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:17 compute-0 boring_carson[125650]: {
Jan 31 08:14:17 compute-0 boring_carson[125650]:     "0": [
Jan 31 08:14:17 compute-0 boring_carson[125650]:         {
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "devices": [
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "/dev/loop3"
Jan 31 08:14:17 compute-0 boring_carson[125650]:             ],
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_name": "ceph_lv0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_size": "21470642176",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "name": "ceph_lv0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "tags": {
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cluster_name": "ceph",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.crush_device_class": "",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.encrypted": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.objectstore": "bluestore",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osd_id": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.type": "block",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.vdo": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.with_tpm": "0"
Jan 31 08:14:17 compute-0 boring_carson[125650]:             },
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "type": "block",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "vg_name": "ceph_vg0"
Jan 31 08:14:17 compute-0 boring_carson[125650]:         }
Jan 31 08:14:17 compute-0 boring_carson[125650]:     ],
Jan 31 08:14:17 compute-0 boring_carson[125650]:     "1": [
Jan 31 08:14:17 compute-0 boring_carson[125650]:         {
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "devices": [
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "/dev/loop4"
Jan 31 08:14:17 compute-0 boring_carson[125650]:             ],
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_name": "ceph_lv1",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_size": "21470642176",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "name": "ceph_lv1",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "tags": {
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cluster_name": "ceph",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.crush_device_class": "",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.encrypted": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.objectstore": "bluestore",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osd_id": "1",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.type": "block",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.vdo": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.with_tpm": "0"
Jan 31 08:14:17 compute-0 boring_carson[125650]:             },
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "type": "block",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "vg_name": "ceph_vg1"
Jan 31 08:14:17 compute-0 boring_carson[125650]:         }
Jan 31 08:14:17 compute-0 boring_carson[125650]:     ],
Jan 31 08:14:17 compute-0 boring_carson[125650]:     "2": [
Jan 31 08:14:17 compute-0 boring_carson[125650]:         {
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "devices": [
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "/dev/loop5"
Jan 31 08:14:17 compute-0 boring_carson[125650]:             ],
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_name": "ceph_lv2",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_size": "21470642176",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "name": "ceph_lv2",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "tags": {
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.cluster_name": "ceph",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.crush_device_class": "",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.encrypted": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.objectstore": "bluestore",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osd_id": "2",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.type": "block",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.vdo": "0",
Jan 31 08:14:17 compute-0 boring_carson[125650]:                 "ceph.with_tpm": "0"
Jan 31 08:14:17 compute-0 boring_carson[125650]:             },
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "type": "block",
Jan 31 08:14:17 compute-0 boring_carson[125650]:             "vg_name": "ceph_vg2"
Jan 31 08:14:17 compute-0 boring_carson[125650]:         }
Jan 31 08:14:17 compute-0 boring_carson[125650]:     ]
Jan 31 08:14:17 compute-0 boring_carson[125650]: }
Jan 31 08:14:17 compute-0 systemd[1]: libpod-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope: Deactivated successfully.
Jan 31 08:14:17 compute-0 conmon[125650]: conmon b2532c8b340c531c8710 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope/container/memory.events
Jan 31 08:14:17 compute-0 podman[125634]: 2026-01-31 08:14:17.193343456 +0000 UTC m=+0.372562380 container died b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3048ec1c8eddd8244b0ac76a93a500ca40b2df862371084a2af34ea850f77b60-merged.mount: Deactivated successfully.
Jan 31 08:14:17 compute-0 podman[125634]: 2026-01-31 08:14:17.233690631 +0000 UTC m=+0.412909565 container remove b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:17 compute-0 systemd[1]: libpod-conmon-b2532c8b340c531c8710872358de8584129b570b343ee2442b8d88808da1a30b.scope: Deactivated successfully.
Jan 31 08:14:17 compute-0 sudo[125556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:17 compute-0 sudo[125670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:14:17 compute-0 sudo[125670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:17 compute-0 sudo[125670]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:17 compute-0 sudo[125695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:14:17 compute-0 sudo[125695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.606528598 +0000 UTC m=+0.031764638 container create b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:14:17 compute-0 systemd[1]: Started libpod-conmon-b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648.scope.
Jan 31 08:14:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.656741686 +0000 UTC m=+0.081977756 container init b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.661116653 +0000 UTC m=+0.086352693 container start b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:14:17 compute-0 funny_lamarr[125749]: 167 167
Jan 31 08:14:17 compute-0 systemd[1]: libpod-b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648.scope: Deactivated successfully.
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.66516274 +0000 UTC m=+0.090398800 container attach b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.665626173 +0000 UTC m=+0.090862233 container died b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5724626db40ca89fdea9d1fbdec7cfe2ab39221004555338f0f50d80cf7d5ea-merged.mount: Deactivated successfully.
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.592005661 +0000 UTC m=+0.017241721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:17 compute-0 podman[125733]: 2026-01-31 08:14:17.693360942 +0000 UTC m=+0.118596982 container remove b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:14:17 compute-0 systemd[1]: libpod-conmon-b61a3d3a3fc1bc23828e92ec3e54f0fb79d2e7bf0b9707d35ea472213a9f4648.scope: Deactivated successfully.
Jan 31 08:14:17 compute-0 podman[125771]: 2026-01-31 08:14:17.82988437 +0000 UTC m=+0.053242570 container create 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:14:17 compute-0 systemd[1]: Started libpod-conmon-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope.
Jan 31 08:14:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:14:17 compute-0 podman[125771]: 2026-01-31 08:14:17.896206088 +0000 UTC m=+0.119564348 container init 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:14:17 compute-0 podman[125771]: 2026-01-31 08:14:17.805717376 +0000 UTC m=+0.029075676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:14:17 compute-0 podman[125771]: 2026-01-31 08:14:17.901463368 +0000 UTC m=+0.124821568 container start 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:14:17 compute-0 podman[125771]: 2026-01-31 08:14:17.904316274 +0000 UTC m=+0.127674474 container attach 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:14:18 compute-0 lvm[125865]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:14:18 compute-0 lvm[125865]: VG ceph_vg0 finished
Jan 31 08:14:18 compute-0 lvm[125866]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:14:18 compute-0 lvm[125866]: VG ceph_vg1 finished
Jan 31 08:14:18 compute-0 lvm[125868]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:14:18 compute-0 lvm[125868]: VG ceph_vg2 finished
Jan 31 08:14:18 compute-0 distracted_fermi[125787]: {}
Jan 31 08:14:18 compute-0 systemd[1]: libpod-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope: Deactivated successfully.
Jan 31 08:14:18 compute-0 podman[125771]: 2026-01-31 08:14:18.623541812 +0000 UTC m=+0.846900032 container died 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:14:18 compute-0 systemd[1]: libpod-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope: Consumed 1.017s CPU time.
Jan 31 08:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-937a4470b74e7ba791cd72bc6e0e71712e7604e064d58f705ef2166ea94260fc-merged.mount: Deactivated successfully.
Jan 31 08:14:18 compute-0 podman[125771]: 2026-01-31 08:14:18.666313092 +0000 UTC m=+0.889671282 container remove 628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:14:18 compute-0 systemd[1]: libpod-conmon-628ebddf4d7f9b98e2e92a9a329769f522bf81201934e950a60e6901372cdb87.scope: Deactivated successfully.
Jan 31 08:14:18 compute-0 sudo[125695]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:14:18 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:14:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:14:18 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:14:18 compute-0 sudo[125882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:14:18 compute-0 sudo[125882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:14:18 compute-0 sudo[125882]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:19 compute-0 ceph-mon[75227]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:14:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:14:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:21 compute-0 ceph-mon[75227]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:21 compute-0 sshd-session[125907]: Accepted publickey for zuul from 192.168.122.30 port 49498 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:14:21 compute-0 systemd-logind[793]: New session 43 of user zuul.
Jan 31 08:14:21 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 31 08:14:21 compute-0 sshd-session[125907]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:14:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:22 compute-0 python3.9[126060]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:14:22 compute-0 sudo[126214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csumkvugyinalsgqppyjkcmpzawsgufb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847262.6513155-29-195251522188569/AnsiballZ_setup.py'
Jan 31 08:14:22 compute-0 sudo[126214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:23 compute-0 ceph-mon[75227]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:23 compute-0 python3.9[126216]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:14:23 compute-0 sudo[126214]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:23 compute-0 sudo[126298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeowaragokdeglkwmvsmmfpcfofalfzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847262.6513155-29-195251522188569/AnsiballZ_dnf.py'
Jan 31 08:14:23 compute-0 sudo[126298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:23 compute-0 python3.9[126300]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 08:14:25 compute-0 ceph-mon[75227]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:25 compute-0 sudo[126298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:25 compute-0 python3.9[126451]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:14:26 compute-0 python3.9[126602]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:14:27 compute-0 ceph-mon[75227]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:27 compute-0 python3.9[126752]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:14:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:28 compute-0 python3.9[126902]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:14:28 compute-0 sshd-session[125910]: Connection closed by 192.168.122.30 port 49498
Jan 31 08:14:28 compute-0 sshd-session[125907]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:14:28 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 08:14:28 compute-0 systemd[1]: session-43.scope: Consumed 5.050s CPU time.
Jan 31 08:14:28 compute-0 systemd-logind[793]: Session 43 logged out. Waiting for processes to exit.
Jan 31 08:14:28 compute-0 systemd-logind[793]: Removed session 43.
Jan 31 08:14:29 compute-0 ceph-mon[75227]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:30 compute-0 sshd-session[71401]: Received disconnect from 38.102.83.220 port 57680:11: disconnected by user
Jan 31 08:14:30 compute-0 sshd-session[71401]: Disconnected from user zuul 38.102.83.220 port 57680
Jan 31 08:14:30 compute-0 sshd-session[71398]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:14:30 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 08:14:30 compute-0 systemd[1]: session-18.scope: Consumed 1min 28.530s CPU time.
Jan 31 08:14:30 compute-0 systemd-logind[793]: Session 18 logged out. Waiting for processes to exit.
Jan 31 08:14:30 compute-0 systemd-logind[793]: Removed session 18.
Jan 31 08:14:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:31 compute-0 ceph-mon[75227]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:14:31
Jan 31 08:14:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:14:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:14:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 08:14:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:14:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:32 compute-0 ceph-mon[75227]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:14:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:14:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:34 compute-0 sshd-session[126927]: Accepted publickey for zuul from 192.168.122.30 port 52208 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:14:34 compute-0 systemd-logind[793]: New session 44 of user zuul.
Jan 31 08:14:34 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 31 08:14:34 compute-0 sshd-session[126927]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:14:34 compute-0 ceph-mon[75227]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:35 compute-0 python3.9[127080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:14:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:36 compute-0 sudo[127234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgnrwaqduitdrnewrbuviquqmuuxjhts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847276.3845465-45-71787463616015/AnsiballZ_file.py'
Jan 31 08:14:36 compute-0 sudo[127234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:36 compute-0 python3.9[127236]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:36 compute-0 sudo[127234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:37 compute-0 ceph-mon[75227]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:37 compute-0 sudo[127386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdqdgrhuleqvenccblhwnmtvxcmjozv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847277.085912-45-40285789593034/AnsiballZ_file.py'
Jan 31 08:14:37 compute-0 sudo[127386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:37 compute-0 python3.9[127388]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:37 compute-0 sudo[127386]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:38 compute-0 sudo[127538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adshwomnfjqdsziiqfmankbuiteyxcpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847277.7516167-60-173834942388739/AnsiballZ_stat.py'
Jan 31 08:14:38 compute-0 sudo[127538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:38 compute-0 python3.9[127540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:38 compute-0 sudo[127538]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:38 compute-0 sudo[127661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxzhhiaimxdipbfqewakkixncutapko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847277.7516167-60-173834942388739/AnsiballZ_copy.py'
Jan 31 08:14:38 compute-0 sudo[127661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:38 compute-0 python3.9[127663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847277.7516167-60-173834942388739/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fdbed11d72702d0c28585d2f3fa0ede8c1d99a43 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:39 compute-0 sudo[127661]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:39 compute-0 ceph-mon[75227]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:39 compute-0 sudo[127813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygpjltqfnhvawohkkpsomuvjpxxuxkmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847279.1072574-60-111341352915368/AnsiballZ_stat.py'
Jan 31 08:14:39 compute-0 sudo[127813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:39 compute-0 python3.9[127815]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:39 compute-0 sudo[127813]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:39 compute-0 sudo[127936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiqicwppoipcyrpiggsnfykuaxwvxwbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847279.1072574-60-111341352915368/AnsiballZ_copy.py'
Jan 31 08:14:39 compute-0 sudo[127936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:39 compute-0 python3.9[127938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847279.1072574-60-111341352915368/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7181925ca4d0c23701428eb3b5989ad45810d4dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:40 compute-0 sudo[127936]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:40 compute-0 sudo[128088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxkoxqmimwxwalbzipbxkremczukvgax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847280.1287086-60-180092125624263/AnsiballZ_stat.py'
Jan 31 08:14:40 compute-0 sudo[128088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:40 compute-0 python3.9[128090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:40 compute-0 sudo[128088]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:40 compute-0 sudo[128211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qciwohaktctkptxkdkmuxodijhvmuyld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847280.1287086-60-180092125624263/AnsiballZ_copy.py'
Jan 31 08:14:40 compute-0 sudo[128211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:41 compute-0 python3.9[128213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847280.1287086-60-180092125624263/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=fce23680cf7dc4c27547710ac9265ef124ceb373 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:41 compute-0 sudo[128211]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:41 compute-0 ceph-mon[75227]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:41 compute-0 sudo[128363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhiqezefrgqblqwmzklolbaeewkozqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847281.182181-104-198284423756837/AnsiballZ_file.py'
Jan 31 08:14:41 compute-0 sudo[128363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:41 compute-0 python3.9[128365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:41 compute-0 sudo[128363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:41 compute-0 sudo[128515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izubbbexgcxftzfywoxhknuclvenqmfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847281.7692337-104-56557848515960/AnsiballZ_file.py'
Jan 31 08:14:41 compute-0 sudo[128515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:42 compute-0 python3.9[128517]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:42 compute-0 sudo[128515]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:42 compute-0 ceph-mon[75227]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:42 compute-0 sudo[128667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcyrpqhossbbmegrkieccevhvhejpajm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847282.385077-119-225058434797769/AnsiballZ_stat.py'
Jan 31 08:14:42 compute-0 sudo[128667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:42 compute-0 python3.9[128669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:42 compute-0 sudo[128667]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:14:43 compute-0 sudo[128790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itbpvrdeshpgpachmuegejasvxveyxkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847282.385077-119-225058434797769/AnsiballZ_copy.py'
Jan 31 08:14:43 compute-0 sudo[128790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:43 compute-0 python3.9[128792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847282.385077-119-225058434797769/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6120145c13ba3d014fbf8fdeb4b2ab094d53d173 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:43 compute-0 sudo[128790]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:43 compute-0 sudo[128942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjdpohqhrstwmyxejmnvaopayklfsxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847283.4642477-119-123690763768607/AnsiballZ_stat.py'
Jan 31 08:14:43 compute-0 sudo[128942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:43 compute-0 python3.9[128944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:43 compute-0 sudo[128942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:44 compute-0 sudo[129065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdzhrkeepzjgfwtcchvturvvryvucxom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847283.4642477-119-123690763768607/AnsiballZ_copy.py'
Jan 31 08:14:44 compute-0 sudo[129065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:44 compute-0 python3.9[129067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847283.4642477-119-123690763768607/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d574945944269f1960401828db2762d2c018b87 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:44 compute-0 sudo[129065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:44 compute-0 sudo[129217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfmrqomgnqexxcxylfmoeapuzsmezpoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847284.53473-119-206437190671897/AnsiballZ_stat.py'
Jan 31 08:14:44 compute-0 sudo[129217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:44 compute-0 python3.9[129219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:44 compute-0 sudo[129217]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:44 compute-0 ceph-mon[75227]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:45 compute-0 sudo[129340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enftklqlzgbisemnbuxdqpddczqoodwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847284.53473-119-206437190671897/AnsiballZ_copy.py'
Jan 31 08:14:45 compute-0 sudo[129340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:45 compute-0 python3.9[129342]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847284.53473-119-206437190671897/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e86b6cf4aae47e5bdcaaf716eff3983006637253 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:45 compute-0 sudo[129340]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:45 compute-0 sudo[129492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkcoebyezqvmliuyykmwcwjacdyutayb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847285.5533123-163-36112015381441/AnsiballZ_file.py'
Jan 31 08:14:45 compute-0 sudo[129492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:45 compute-0 python3.9[129494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:45 compute-0 sudo[129492]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:46 compute-0 sudo[129644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwayqlnumqaqtkboqreqnnqnepejawgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847286.030521-163-32872549178994/AnsiballZ_file.py'
Jan 31 08:14:46 compute-0 sudo[129644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:46 compute-0 python3.9[129646]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:46 compute-0 sudo[129644]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:46 compute-0 sudo[129796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrfsqydtwznoagzyeffmfghexklefaxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847286.6491563-178-72713593394978/AnsiballZ_stat.py'
Jan 31 08:14:46 compute-0 sudo[129796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:46 compute-0 ceph-mon[75227]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:47 compute-0 python3.9[129798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:47 compute-0 sudo[129796]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:47 compute-0 sudo[129919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfmonxcbnagttbfmmemlkywjtakyfhmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847286.6491563-178-72713593394978/AnsiballZ_copy.py'
Jan 31 08:14:47 compute-0 sudo[129919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:47 compute-0 python3.9[129921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847286.6491563-178-72713593394978/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=89a2dedd8167f548d7ca9fc4e2315de9d798066a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:47 compute-0 sudo[129919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:48 compute-0 sudo[130071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdeygqeporxexwdjffmudxicklimpijz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847287.7646909-178-163371699699038/AnsiballZ_stat.py'
Jan 31 08:14:48 compute-0 sudo[130071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:48 compute-0 python3.9[130073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:48 compute-0 sudo[130071]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:48 compute-0 sudo[130194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idmlssnpavnxybskjqmjuyjbcpxaqwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847287.7646909-178-163371699699038/AnsiballZ_copy.py'
Jan 31 08:14:48 compute-0 sudo[130194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:48 compute-0 python3.9[130196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847287.7646909-178-163371699699038/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d574945944269f1960401828db2762d2c018b87 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:48 compute-0 sudo[130194]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:48 compute-0 ceph-mon[75227]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:49 compute-0 sudo[130346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqilkmhijjbgsfqsykcycnqpcqahvkhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847288.919947-178-97212129338974/AnsiballZ_stat.py'
Jan 31 08:14:49 compute-0 sudo[130346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:49 compute-0 python3.9[130348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:49 compute-0 sudo[130346]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:49 compute-0 sudo[130469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjrgqhllxgopgxhpywhewlhdybyoojcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847288.919947-178-97212129338974/AnsiballZ_copy.py'
Jan 31 08:14:49 compute-0 sudo[130469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:49 compute-0 python3.9[130471]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847288.919947-178-97212129338974/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c9c1b4eb3cf996ed403ca96208c923e78c00afee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:49 compute-0 sudo[130469]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:50 compute-0 sudo[130621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzyxnwiqedmapnvaqgisksjhyyzrylnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847290.5017262-238-161315401120601/AnsiballZ_file.py'
Jan 31 08:14:50 compute-0 sudo[130621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:50 compute-0 python3.9[130623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:50 compute-0 ceph-mon[75227]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:50 compute-0 sudo[130621]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:51 compute-0 sudo[130773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avacvdbwucalsppjyhysuccgpguriimi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847291.115542-246-161429158329434/AnsiballZ_stat.py'
Jan 31 08:14:51 compute-0 sudo[130773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:51 compute-0 python3.9[130775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:51 compute-0 sudo[130773]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:51 compute-0 sudo[130896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyzajsjqhjngpyyckksddwkmqhqbwjfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847291.115542-246-161429158329434/AnsiballZ_copy.py'
Jan 31 08:14:51 compute-0 sudo[130896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:52 compute-0 python3.9[130898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847291.115542-246-161429158329434/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:52 compute-0 sudo[130896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:52 compute-0 sudo[131048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcdtkijtblggeqwnddoresfcjqubmfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847292.301771-262-119512193742992/AnsiballZ_file.py'
Jan 31 08:14:52 compute-0 sudo[131048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:52 compute-0 python3.9[131050]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:52 compute-0 sudo[131048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:52 compute-0 ceph-mon[75227]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:53 compute-0 sudo[131200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjnbtgaaitubmznejnjqywjolobbtwsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847292.9681025-270-67096905926248/AnsiballZ_stat.py'
Jan 31 08:14:53 compute-0 sudo[131200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:53 compute-0 python3.9[131202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:53 compute-0 sudo[131200]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:53 compute-0 sudo[131323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqzngcijqcjkreiezfqyzdjnpwcxxjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847292.9681025-270-67096905926248/AnsiballZ_copy.py'
Jan 31 08:14:53 compute-0 sudo[131323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:53 compute-0 python3.9[131325]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847292.9681025-270-67096905926248/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:53 compute-0 sudo[131323]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:54 compute-0 sudo[131475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslmenrtgrvyzrrjsrosfhmqfauemtut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847294.0796418-286-132226268032830/AnsiballZ_file.py'
Jan 31 08:14:54 compute-0 sudo[131475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:54 compute-0 python3.9[131477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:54 compute-0 sudo[131475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:54 compute-0 sudo[131627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mylmbpzigclcslrvrdqqeezxngagdciq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847294.6906905-294-271771990330542/AnsiballZ_stat.py'
Jan 31 08:14:54 compute-0 sudo[131627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:54 compute-0 ceph-mon[75227]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:55 compute-0 python3.9[131629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:55 compute-0 sudo[131627]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-0 sudo[131750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbprdgzminjitgifzunqqubkrtctrowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847294.6906905-294-271771990330542/AnsiballZ_copy.py'
Jan 31 08:14:55 compute-0 sudo[131750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:55 compute-0 python3.9[131752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847294.6906905-294-271771990330542/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:14:55 compute-0 sudo[131750]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:56 compute-0 sudo[131902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqgtlbfdpyxozowrbnpqcwziyqtehfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847295.816091-310-444559816297/AnsiballZ_file.py'
Jan 31 08:14:56 compute-0 sudo[131902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:56 compute-0 python3.9[131904]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:56 compute-0 sudo[131902]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:56 compute-0 sudo[132054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbonwlnscwooxdnnqorjifwzkidblgbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847296.355829-318-165895419332194/AnsiballZ_stat.py'
Jan 31 08:14:56 compute-0 sudo[132054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:56 compute-0 python3.9[132056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:56 compute-0 sudo[132054]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:57 compute-0 ceph-mon[75227]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:57 compute-0 sudo[132177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jufqwbcqnpofivawbscapdyvdbptwofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847296.355829-318-165895419332194/AnsiballZ_copy.py'
Jan 31 08:14:57 compute-0 sudo[132177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:57 compute-0 python3.9[132179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847296.355829-318-165895419332194/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:57 compute-0 sudo[132177]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:57 compute-0 sudo[132329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utjjbwvckqyhxbtmyacqxdfoeuysksju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847297.4589124-334-49031165054800/AnsiballZ_file.py'
Jan 31 08:14:57 compute-0 sudo[132329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:57 compute-0 python3.9[132331]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:57 compute-0 sudo[132329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:58 compute-0 sudo[132481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-divovqucxbezbznqrunmgrzhvqzzrzlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847298.127833-342-261183602427854/AnsiballZ_stat.py'
Jan 31 08:14:58 compute-0 sudo[132481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:58 compute-0 python3.9[132483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:14:58 compute-0 sudo[132481]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:58 compute-0 sudo[132604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yumqnawukkpheejnreikbrnoacrapkcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847298.127833-342-261183602427854/AnsiballZ_copy.py'
Jan 31 08:14:58 compute-0 sudo[132604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:59 compute-0 ceph-mon[75227]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:14:59 compute-0 python3.9[132606]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847298.127833-342-261183602427854/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:14:59 compute-0 sudo[132604]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:59 compute-0 sudo[132756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzlroerleqygwohsusdmrllckoupgtne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847299.2225854-358-844502569632/AnsiballZ_file.py'
Jan 31 08:14:59 compute-0 sudo[132756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:14:59 compute-0 python3.9[132758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:14:59 compute-0 sudo[132756]: pam_unix(sudo:session): session closed for user root
Jan 31 08:14:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:00 compute-0 sudo[132908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejtzvhkbrfmmwuugyvrabvunronatcnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847299.8365793-366-105837305318786/AnsiballZ_stat.py'
Jan 31 08:15:00 compute-0 sudo[132908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:00 compute-0 python3.9[132910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:00 compute-0 sudo[132908]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:00 compute-0 sudo[133031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvaasonbkgeeslfokgmbvclflwmqpuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847299.8365793-366-105837305318786/AnsiballZ_copy.py'
Jan 31 08:15:00 compute-0 sudo[133031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:00 compute-0 python3.9[133033]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847299.8365793-366-105837305318786/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=ade25fea9b4947a8606692264e6e294ddcaac679 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:00 compute-0 sudo[133031]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:01 compute-0 ceph-mon[75227]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:01 compute-0 sshd-session[126930]: Connection closed by 192.168.122.30 port 52208
Jan 31 08:15:01 compute-0 sshd-session[126927]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:15:01 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 08:15:01 compute-0 systemd[1]: session-44.scope: Consumed 20.210s CPU time.
Jan 31 08:15:01 compute-0 systemd-logind[793]: Session 44 logged out. Waiting for processes to exit.
Jan 31 08:15:01 compute-0 systemd-logind[793]: Removed session 44.
Jan 31 08:15:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:03 compute-0 ceph-mon[75227]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:05 compute-0 ceph-mon[75227]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:06 compute-0 sshd-session[133058]: Accepted publickey for zuul from 192.168.122.30 port 41880 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:15:06 compute-0 systemd-logind[793]: New session 45 of user zuul.
Jan 31 08:15:06 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 31 08:15:06 compute-0 sshd-session[133058]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:15:07 compute-0 ceph-mon[75227]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:07 compute-0 sudo[133211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkektodbznxthjhmqhkgtodylfmyvemi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847306.87722-17-100349107840794/AnsiballZ_file.py'
Jan 31 08:15:07 compute-0 sudo[133211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:07 compute-0 python3.9[133213]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:07 compute-0 sudo[133211]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:08 compute-0 sudo[133363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywplpntpthpkgqdssqqsowygxebusaxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847307.6373734-29-86429583385594/AnsiballZ_stat.py'
Jan 31 08:15:08 compute-0 sudo[133363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:08 compute-0 python3.9[133365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:08 compute-0 sudo[133363]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:08 compute-0 sudo[133486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omjhflxrajlwrgtvijverlblufhzbmti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847307.6373734-29-86429583385594/AnsiballZ_copy.py'
Jan 31 08:15:08 compute-0 sudo[133486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:08 compute-0 python3.9[133488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847307.6373734-29-86429583385594/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=5ead94c69bd1df72757f346af781128058784f3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:08 compute-0 sudo[133486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:09 compute-0 ceph-mon[75227]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:09 compute-0 sudo[133638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mszapkbzgoahdvznwscxmfogvhoafszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847308.940014-29-58775629134600/AnsiballZ_stat.py'
Jan 31 08:15:09 compute-0 sudo[133638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:09 compute-0 python3.9[133640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:09 compute-0 sudo[133638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:09 compute-0 sudo[133761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muhuudczqqeszqpaoxdnewcjzkkugrux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847308.940014-29-58775629134600/AnsiballZ_copy.py'
Jan 31 08:15:09 compute-0 sudo[133761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:09 compute-0 python3.9[133763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847308.940014-29-58775629134600/.source.conf _original_basename=ceph.conf follow=False checksum=a00f0ea0dc22846dc13e7a7ab591bc83410e8962 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:09 compute-0 sudo[133761]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:10 compute-0 sshd-session[133061]: Connection closed by 192.168.122.30 port 41880
Jan 31 08:15:10 compute-0 sshd-session[133058]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:15:10 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 08:15:10 compute-0 systemd[1]: session-45.scope: Consumed 2.250s CPU time.
Jan 31 08:15:10 compute-0 systemd-logind[793]: Session 45 logged out. Waiting for processes to exit.
Jan 31 08:15:10 compute-0 systemd-logind[793]: Removed session 45.
Jan 31 08:15:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:11 compute-0 ceph-mon[75227]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:13 compute-0 ceph-mon[75227]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:15 compute-0 ceph-mon[75227]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:15 compute-0 sshd-session[133788]: Accepted publickey for zuul from 192.168.122.30 port 55562 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:15:15 compute-0 systemd-logind[793]: New session 46 of user zuul.
Jan 31 08:15:15 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 31 08:15:15 compute-0 sshd-session[133788]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:15:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:16 compute-0 python3.9[133941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:15:17 compute-0 ceph-mon[75227]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:17 compute-0 sudo[134095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iujicovvkbujclzggbkfnwgpwuvklqwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847317.1205244-29-53166915645219/AnsiballZ_file.py'
Jan 31 08:15:17 compute-0 sudo[134095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:17 compute-0 python3.9[134097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:17 compute-0 sudo[134095]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:17 compute-0 sudo[134247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fesazvzbnanabzsatxywjwnyifzmokix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847317.7902324-29-233241718981789/AnsiballZ_file.py'
Jan 31 08:15:17 compute-0 sudo[134247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:18 compute-0 python3.9[134249]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:18 compute-0 sudo[134247]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:18 compute-0 ceph-mon[75227]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:18 compute-0 sudo[134400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:18 compute-0 sudo[134400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:18 compute-0 sudo[134400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:18 compute-0 python3.9[134399]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:15:18 compute-0 sudo[134425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:15:18 compute-0 sudo[134425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:19 compute-0 sudo[134425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:15:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:19 compute-0 sudo[134604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:19 compute-0 sudo[134604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:15:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:15:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:15:19 compute-0 sudo[134604]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:19 compute-0 sudo[134655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgilzslgzjcbsaezooaltsgmjeyxghc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847318.9904623-52-274103540487698/AnsiballZ_seboolean.py'
Jan 31 08:15:19 compute-0 sudo[134655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:19 compute-0 sudo[134656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:15:19 compute-0 sudo[134656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:19 compute-0 python3.9[134675]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.611418236 +0000 UTC m=+0.037547675 container create fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:15:19 compute-0 systemd[1]: Started libpod-conmon-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope.
Jan 31 08:15:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.665454553 +0000 UTC m=+0.091584042 container init fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.672759864 +0000 UTC m=+0.098889323 container start fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:15:19 compute-0 systemd[1]: libpod-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope: Deactivated successfully.
Jan 31 08:15:19 compute-0 pedantic_margulis[134713]: 167 167
Jan 31 08:15:19 compute-0 conmon[134713]: conmon fdee190c39bf0ebeeb73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope/container/memory.events
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.680290162 +0000 UTC m=+0.106419621 container attach fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.680753304 +0000 UTC m=+0.106882763 container died fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.591604861 +0000 UTC m=+0.017734320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-74d79edfabef9a25c70ef9e8fab480b3405247726a5a6ca0f401dd4d31651383-merged.mount: Deactivated successfully.
Jan 31 08:15:19 compute-0 podman[134696]: 2026-01-31 08:15:19.715950603 +0000 UTC m=+0.142080032 container remove fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_margulis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:15:19 compute-0 systemd[1]: libpod-conmon-fdee190c39bf0ebeeb73fd26fbcfbeb94b2e2ffdc33543e0d408bf32a23b4825.scope: Deactivated successfully.
Jan 31 08:15:19 compute-0 podman[134739]: 2026-01-31 08:15:19.833686884 +0000 UTC m=+0.042640775 container create 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:15:19 compute-0 systemd[1]: Started libpod-conmon-2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1.scope.
Jan 31 08:15:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:19 compute-0 podman[134739]: 2026-01-31 08:15:19.81211647 +0000 UTC m=+0.021070381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:19 compute-0 podman[134739]: 2026-01-31 08:15:19.977512123 +0000 UTC m=+0.186466024 container init 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:15:19 compute-0 podman[134739]: 2026-01-31 08:15:19.982949672 +0000 UTC m=+0.191903553 container start 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:15:20 compute-0 podman[134739]: 2026-01-31 08:15:20.075675345 +0000 UTC m=+0.284629246 container attach 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:15:20 compute-0 wonderful_maxwell[134755]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:15:20 compute-0 wonderful_maxwell[134755]: --> All data devices are unavailable
Jan 31 08:15:20 compute-0 systemd[1]: libpod-2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1.scope: Deactivated successfully.
Jan 31 08:15:20 compute-0 podman[134739]: 2026-01-31 08:15:20.366781617 +0000 UTC m=+0.575735508 container died 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:20 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 08:15:20 compute-0 ceph-mon[75227]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c469d2e048a71486591f4d22f1f37f4cd53e7640c6a98c6755ab3c68b2ae83d-merged.mount: Deactivated successfully.
Jan 31 08:15:20 compute-0 podman[134739]: 2026-01-31 08:15:20.601422475 +0000 UTC m=+0.810376356 container remove 2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:15:20 compute-0 systemd[1]: libpod-conmon-2ca837a3c57a0236b85d9486767fc22b9c31f24445fe7f132c34bf2dece2e1d1.scope: Deactivated successfully.
Jan 31 08:15:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:20 compute-0 sudo[134656]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.664661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320664688, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1382, "num_deletes": 250, "total_data_size": 2070695, "memory_usage": 2105992, "flush_reason": "Manual Compaction"}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 08:15:20 compute-0 sudo[134655]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320672055, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1215011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7460, "largest_seqno": 8841, "table_properties": {"data_size": 1210178, "index_size": 2101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13059, "raw_average_key_size": 20, "raw_value_size": 1199277, "raw_average_value_size": 1862, "num_data_blocks": 99, "num_entries": 644, "num_filter_entries": 644, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847185, "oldest_key_time": 1769847185, "file_creation_time": 1769847320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7482 microseconds, and 2914 cpu microseconds.
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.672129) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1215011 bytes OK
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.672162) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.674408) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.674429) EVENT_LOG_v1 {"time_micros": 1769847320674423, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.674458) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2064443, prev total WAL file size 2064443, number of live WAL files 2.
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.675151) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1186KB)], [20(7838KB)]
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320675224, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9242130, "oldest_snapshot_seqno": -1}
Jan 31 08:15:20 compute-0 sudo[134796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:20 compute-0 sudo[134796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:20 compute-0 sudo[134796]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3350 keys, 7109940 bytes, temperature: kUnknown
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320734516, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7109940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7083734, "index_size": 16752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 80694, "raw_average_key_size": 24, "raw_value_size": 7019288, "raw_average_value_size": 2095, "num_data_blocks": 741, "num_entries": 3350, "num_filter_entries": 3350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847320, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.734722) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7109940 bytes
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.737209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 119.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.7 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(13.5) write-amplify(5.9) OK, records in: 3800, records dropped: 450 output_compression: NoCompression
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.737228) EVENT_LOG_v1 {"time_micros": 1769847320737218, "job": 6, "event": "compaction_finished", "compaction_time_micros": 59353, "compaction_time_cpu_micros": 24566, "output_level": 6, "num_output_files": 1, "total_output_size": 7109940, "num_input_records": 3800, "num_output_records": 3350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320737412, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847320738224, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.675044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:15:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:15:20.738338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:15:20 compute-0 sudo[134839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:15:20 compute-0 sudo[134839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:21.029713904 +0000 UTC m=+0.061701539 container create af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:20.986854364 +0000 UTC m=+0.018842009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:21 compute-0 systemd[1]: Started libpod-conmon-af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4.scope.
Jan 31 08:15:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:21 compute-0 sudo[135024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjizkclnlpuxeribovufrkqwsrvzkmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847320.8970883-62-106905847480482/AnsiballZ_setup.py'
Jan 31 08:15:21 compute-0 sudo[135024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:21.141641685 +0000 UTC m=+0.173629350 container init af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:21.147896147 +0000 UTC m=+0.179883782 container start af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:21 compute-0 exciting_bhabha[135025]: 167 167
Jan 31 08:15:21 compute-0 systemd[1]: libpod-af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4.scope: Deactivated successfully.
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:21.232191357 +0000 UTC m=+0.264179002 container attach af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:21.232579048 +0000 UTC m=+0.264566673 container died af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ce77ed7bc5b6a9d47fa4797a0b4f3ae77f6c57467c1f97231689b7a98804b9-merged.mount: Deactivated successfully.
Jan 31 08:15:21 compute-0 python3.9[135029]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:15:21 compute-0 podman[134935]: 2026-01-31 08:15:21.496072251 +0000 UTC m=+0.528059876 container remove af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:15:21 compute-0 systemd[1]: libpod-conmon-af77772a292020bcb9430579c8d6d16ed44a5a08ab8542a9b5ebf70f875d43b4.scope: Deactivated successfully.
Jan 31 08:15:21 compute-0 podman[135059]: 2026-01-31 08:15:21.601352458 +0000 UTC m=+0.035708363 container create 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:15:21 compute-0 systemd[1]: Started libpod-conmon-0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f.scope.
Jan 31 08:15:21 compute-0 sudo[135024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:21 compute-0 podman[135059]: 2026-01-31 08:15:21.58506089 +0000 UTC m=+0.019416845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:21 compute-0 podman[135059]: 2026-01-31 08:15:21.695196542 +0000 UTC m=+0.129552477 container init 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:15:21 compute-0 podman[135059]: 2026-01-31 08:15:21.700201409 +0000 UTC m=+0.134557314 container start 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:15:21 compute-0 podman[135059]: 2026-01-31 08:15:21.70420442 +0000 UTC m=+0.138560325 container attach 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:15:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]: {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:     "0": [
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:         {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "devices": [
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "/dev/loop3"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             ],
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_name": "ceph_lv0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_size": "21470642176",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "name": "ceph_lv0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "tags": {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.crush_device_class": "",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.encrypted": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.objectstore": "bluestore",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osd_id": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.type": "block",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.vdo": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.with_tpm": "0"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             },
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "type": "block",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "vg_name": "ceph_vg0"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:         }
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:     ],
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:     "1": [
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:         {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "devices": [
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "/dev/loop4"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             ],
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_name": "ceph_lv1",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_size": "21470642176",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "name": "ceph_lv1",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "tags": {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.crush_device_class": "",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.encrypted": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.objectstore": "bluestore",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osd_id": "1",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.type": "block",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.vdo": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.with_tpm": "0"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             },
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "type": "block",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "vg_name": "ceph_vg1"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:         }
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:     ],
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:     "2": [
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:         {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "devices": [
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "/dev/loop5"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             ],
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_name": "ceph_lv2",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_size": "21470642176",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "name": "ceph_lv2",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "tags": {
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.cluster_name": "ceph",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.crush_device_class": "",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.encrypted": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.objectstore": "bluestore",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osd_id": "2",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.type": "block",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.vdo": "0",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:                 "ceph.with_tpm": "0"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             },
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "type": "block",
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:             "vg_name": "ceph_vg2"
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:         }
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]:     ]
Jan 31 08:15:21 compute-0 dazzling_cohen[135076]: }
Jan 31 08:15:21 compute-0 systemd[1]: libpod-0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f.scope: Deactivated successfully.
Jan 31 08:15:21 compute-0 podman[135059]: 2026-01-31 08:15:21.981738349 +0000 UTC m=+0.416094244 container died 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:15:22 compute-0 sudo[135159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggakppqiezozebmjyexppyqqerfihacu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847320.8970883-62-106905847480482/AnsiballZ_dnf.py'
Jan 31 08:15:22 compute-0 sudo[135159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2daf0a7aa49d18792f9f3db8f3687d658faf803088dee21cb49488ba9bc3468f-merged.mount: Deactivated successfully.
Jan 31 08:15:22 compute-0 podman[135059]: 2026-01-31 08:15:22.017827202 +0000 UTC m=+0.452183107 container remove 0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cohen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:15:22 compute-0 systemd[1]: libpod-conmon-0ca5b9e6b055bfee1f0219ac919b40763684d8646d7a42a59ccd6f20c239419f.scope: Deactivated successfully.
Jan 31 08:15:22 compute-0 sudo[134839]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:22 compute-0 sudo[135174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:15:22 compute-0 sudo[135174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:22 compute-0 sudo[135174]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:22 compute-0 sudo[135199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:15:22 compute-0 sudo[135199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:22 compute-0 python3.9[135173]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.403058106 +0000 UTC m=+0.048615849 container create fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:22 compute-0 systemd[1]: Started libpod-conmon-fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f.scope.
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.379321162 +0000 UTC m=+0.024878975 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.490670387 +0000 UTC m=+0.136228210 container init fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.497842985 +0000 UTC m=+0.143400718 container start fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.501114815 +0000 UTC m=+0.146672658 container attach fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:15:22 compute-0 heuristic_moser[135256]: 167 167
Jan 31 08:15:22 compute-0 systemd[1]: libpod-fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f.scope: Deactivated successfully.
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.503064518 +0000 UTC m=+0.148622291 container died fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:15:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b925cfa9c0be030232c1f8e4e156d3d600e2210a0ebe3ab602d03bf46680be27-merged.mount: Deactivated successfully.
Jan 31 08:15:22 compute-0 podman[135238]: 2026-01-31 08:15:22.538411091 +0000 UTC m=+0.183968824 container remove fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:15:22 compute-0 systemd[1]: libpod-conmon-fa3cedf254cd66151162cb9f95024edfff95342ebaa0ab880bce1fdacff8a73f.scope: Deactivated successfully.
Jan 31 08:15:22 compute-0 podman[135279]: 2026-01-31 08:15:22.67389067 +0000 UTC m=+0.045555714 container create 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:15:22 compute-0 systemd[1]: Started libpod-conmon-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope.
Jan 31 08:15:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:15:22 compute-0 podman[135279]: 2026-01-31 08:15:22.735984849 +0000 UTC m=+0.107649893 container init 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:15:22 compute-0 podman[135279]: 2026-01-31 08:15:22.741854601 +0000 UTC m=+0.113519655 container start 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:15:22 compute-0 podman[135279]: 2026-01-31 08:15:22.745592694 +0000 UTC m=+0.117257768 container attach 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:15:22 compute-0 podman[135279]: 2026-01-31 08:15:22.652174823 +0000 UTC m=+0.023839907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:15:22 compute-0 ceph-mon[75227]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:23 compute-0 lvm[135374]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:15:23 compute-0 lvm[135374]: VG ceph_vg0 finished
Jan 31 08:15:23 compute-0 lvm[135375]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:15:23 compute-0 lvm[135375]: VG ceph_vg1 finished
Jan 31 08:15:23 compute-0 lvm[135377]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:15:23 compute-0 lvm[135377]: VG ceph_vg2 finished
Jan 31 08:15:23 compute-0 festive_almeida[135295]: {}
Jan 31 08:15:23 compute-0 systemd[1]: libpod-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope: Deactivated successfully.
Jan 31 08:15:23 compute-0 systemd[1]: libpod-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope: Consumed 1.024s CPU time.
Jan 31 08:15:23 compute-0 podman[135279]: 2026-01-31 08:15:23.471053443 +0000 UTC m=+0.842718487 container died 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-919ddeb8c16bb141f32ea60ed6d80804f001bd3973d2e0cc882be501e463c4b1-merged.mount: Deactivated successfully.
Jan 31 08:15:23 compute-0 sudo[135159]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:24 compute-0 podman[135279]: 2026-01-31 08:15:24.031507069 +0000 UTC m=+1.403172113 container remove 6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:15:24 compute-0 systemd[1]: libpod-conmon-6c844445d37ba8ec114e82be4a10dc676690ed6ced96c59d54d070f073874185.scope: Deactivated successfully.
Jan 31 08:15:24 compute-0 sudo[135199]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:15:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:15:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:15:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:15:24 compute-0 sudo[135469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:15:24 compute-0 sudo[135469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:15:24 compute-0 sudo[135469]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:24 compute-0 sudo[135567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlluyeymownrprgphwjxkihwkiosdwps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847323.9543314-74-23391804380915/AnsiballZ_systemd.py'
Jan 31 08:15:24 compute-0 sudo[135567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:24 compute-0 python3.9[135569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:15:24 compute-0 sudo[135567]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:25 compute-0 ceph-mon[75227]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:15:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:15:25 compute-0 sudo[135722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmgqojcjdmztumejmayzaxbafdsvuaoz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847325.003716-82-245090447031489/AnsiballZ_edpm_nftables_snippet.py'
Jan 31 08:15:25 compute-0 sudo[135722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:25 compute-0 python3[135724]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 08:15:25 compute-0 sudo[135722]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:25 compute-0 sudo[135874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btvdckbolnnociavwhmtaqkfegittggd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847325.7926314-91-275503103210512/AnsiballZ_file.py'
Jan 31 08:15:25 compute-0 sudo[135874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:26 compute-0 python3.9[135876]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:26 compute-0 sudo[135874]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:26 compute-0 sudo[136026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kejihodufrvsahyjdcebynnbahgkicav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847326.351293-99-52629892752452/AnsiballZ_stat.py'
Jan 31 08:15:26 compute-0 sudo[136026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:26 compute-0 python3.9[136028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:26 compute-0 sudo[136026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:27 compute-0 ceph-mon[75227]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:27 compute-0 sudo[136104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyufoljaenpuedrxazoxyfsabxoxpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847326.351293-99-52629892752452/AnsiballZ_file.py'
Jan 31 08:15:27 compute-0 sudo[136104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:27 compute-0 python3.9[136106]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:27 compute-0 sudo[136104]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:27 compute-0 sudo[136256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdkjksiywlrznrhmxunfxbpupwyhcspi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847327.4862297-111-237865396521940/AnsiballZ_stat.py'
Jan 31 08:15:27 compute-0 sudo[136256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:27 compute-0 python3.9[136258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:27 compute-0 sudo[136256]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:28 compute-0 sudo[136334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnwevnfqtfytmlczovcfcohcrdocpzch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847327.4862297-111-237865396521940/AnsiballZ_file.py'
Jan 31 08:15:28 compute-0 sudo[136334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:28 compute-0 python3.9[136336]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.611d6u4w recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:28 compute-0 sudo[136334]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:28 compute-0 ceph-mon[75227]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:28 compute-0 sudo[136486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuisecutahslugxrvaaobzfyghxbdqnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847328.5498116-123-19893459016015/AnsiballZ_stat.py'
Jan 31 08:15:28 compute-0 sudo[136486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:29 compute-0 python3.9[136488]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:29 compute-0 sudo[136486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:29 compute-0 sudo[136564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akafsjnwkrvnppvqzjpqpnvptzxretho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847328.5498116-123-19893459016015/AnsiballZ_file.py'
Jan 31 08:15:29 compute-0 sudo[136564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:29 compute-0 python3.9[136566]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:29 compute-0 sudo[136564]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:29 compute-0 sudo[136716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lasjarlrpkndlghmwpsmvzgfemkjktwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847329.5481312-136-196271984118749/AnsiballZ_command.py'
Jan 31 08:15:29 compute-0 sudo[136716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:30 compute-0 python3.9[136718]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:30 compute-0 sudo[136716]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:30 compute-0 sudo[136869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmpqankbhsndqnfscxlcmwubmyvwzzb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847330.239225-144-24517616959860/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:15:30 compute-0 sudo[136869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:30 compute-0 python3[136871]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:15:30 compute-0 sudo[136869]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:30 compute-0 ceph-mon[75227]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:31 compute-0 sudo[137021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjiclzfzowjewipvhfbvagactargjzyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847330.9402416-152-35645192379239/AnsiballZ_stat.py'
Jan 31 08:15:31 compute-0 sudo[137021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:31 compute-0 python3.9[137023]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:31 compute-0 sudo[137021]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:15:31
Jan 31 08:15:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:15:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:15:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.data', '.mgr', 'vms']
Jan 31 08:15:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:15:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:31 compute-0 sudo[137146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qczvvrgdjiclvttvqdwljzzleqlsqscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847330.9402416-152-35645192379239/AnsiballZ_copy.py'
Jan 31 08:15:31 compute-0 sudo[137146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:32 compute-0 python3.9[137148]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847330.9402416-152-35645192379239/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:32 compute-0 sudo[137146]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:32 compute-0 sudo[137298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiarkhvkonnbzwjqndrrwqfjygizzmti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847332.2327523-167-215500486561212/AnsiballZ_stat.py'
Jan 31 08:15:32 compute-0 sudo[137298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:32 compute-0 python3.9[137300]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:32 compute-0 sudo[137298]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:15:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:15:32 compute-0 ceph-mon[75227]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:33 compute-0 sudo[137423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tltkfioegqinhgezgsyyhcekdgnlvbxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847332.2327523-167-215500486561212/AnsiballZ_copy.py'
Jan 31 08:15:33 compute-0 sudo[137423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:33 compute-0 python3.9[137425]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847332.2327523-167-215500486561212/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:33 compute-0 sudo[137423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:33 compute-0 sudo[137575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twqlksefcpkhshynbjouwvlczjqmgano ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847333.4125764-182-185652564697319/AnsiballZ_stat.py'
Jan 31 08:15:33 compute-0 sudo[137575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:33 compute-0 python3.9[137577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:33 compute-0 sudo[137575]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:34 compute-0 sudo[137700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uksxwxeqebpytdjtircfgdzakjffztof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847333.4125764-182-185652564697319/AnsiballZ_copy.py'
Jan 31 08:15:34 compute-0 sudo[137700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:34 compute-0 python3.9[137702]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847333.4125764-182-185652564697319/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:34 compute-0 sudo[137700]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:34 compute-0 sudo[137852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phumsjtgegbtyyripqnwpmguvaaunnpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847334.477682-197-248212656413907/AnsiballZ_stat.py'
Jan 31 08:15:34 compute-0 sudo[137852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:34 compute-0 python3.9[137854]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:34 compute-0 sudo[137852]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:35 compute-0 ceph-mon[75227]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:35 compute-0 sudo[137977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmifmpaydasuvfqtuisrralanievcpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847334.477682-197-248212656413907/AnsiballZ_copy.py'
Jan 31 08:15:35 compute-0 sudo[137977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:35 compute-0 python3.9[137979]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847334.477682-197-248212656413907/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:35 compute-0 sudo[137977]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:35 compute-0 sudo[138129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uylxgyiwcdmutzbkvhywwzipebkwhjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847335.5875292-212-103803243444494/AnsiballZ_stat.py'
Jan 31 08:15:35 compute-0 sudo[138129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:36 compute-0 python3.9[138131]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:36 compute-0 sudo[138129]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:36 compute-0 sudo[138254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afaajxhmttevihoqwjjehbhqshsaioot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847335.5875292-212-103803243444494/AnsiballZ_copy.py'
Jan 31 08:15:36 compute-0 sudo[138254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:36 compute-0 python3.9[138256]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847335.5875292-212-103803243444494/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:36 compute-0 sudo[138254]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:37 compute-0 sudo[138406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhtxnqkadszxpdvvegmvwvjyzafmqbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847336.8410137-227-218209347283842/AnsiballZ_file.py'
Jan 31 08:15:37 compute-0 sudo[138406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:37 compute-0 ceph-mon[75227]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:37 compute-0 python3.9[138408]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:37 compute-0 sudo[138406]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:37 compute-0 sudo[138558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatzfclpbemxumuehjrksjmppjjcoquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847337.4229631-235-113901069938027/AnsiballZ_command.py'
Jan 31 08:15:37 compute-0 sudo[138558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:37 compute-0 python3.9[138560]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:38 compute-0 sudo[138558]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:38 compute-0 ceph-mon[75227]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:38 compute-0 sudo[138713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlimjcyugxfpatyjjnylxgzvezpjeyym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847338.183285-243-251820461629916/AnsiballZ_blockinfile.py'
Jan 31 08:15:38 compute-0 sudo[138713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:38 compute-0 python3.9[138715]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:38 compute-0 sudo[138713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:39 compute-0 sudo[138865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shwhbgfdwcxxrqnchwihvzmdakayuvah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847338.9324641-252-116740082072305/AnsiballZ_command.py'
Jan 31 08:15:39 compute-0 sudo[138865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:39 compute-0 python3.9[138867]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:39 compute-0 sudo[138865]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:39 compute-0 sudo[139018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brpihmhsarkihiqqlbyjfhcnykfxkcky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847339.5728092-260-132022414204993/AnsiballZ_stat.py'
Jan 31 08:15:39 compute-0 sudo[139018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:39 compute-0 python3.9[139020]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:39 compute-0 sudo[139018]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:40 compute-0 sudo[139172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jghfdtmdynlpgmeotkfbqtnzezownyrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847340.1182742-268-140049282519931/AnsiballZ_command.py'
Jan 31 08:15:40 compute-0 sudo[139172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:40 compute-0 python3.9[139174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:40 compute-0 sudo[139172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:40 compute-0 ceph-mon[75227]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:41 compute-0 sudo[139327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmfbevapeuoldbomhzrkfxhmligiucj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847340.7617893-276-208608833185405/AnsiballZ_file.py'
Jan 31 08:15:41 compute-0 sudo[139327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:41 compute-0 python3.9[139329]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:41 compute-0 sudo[139327]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:42 compute-0 python3.9[139479]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:15:42 compute-0 ceph-mon[75227]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:15:43 compute-0 sudo[139630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tivztznpytchfkzdthbyqpwzlebsfjdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847342.8295648-316-105365529138950/AnsiballZ_command.py'
Jan 31 08:15:43 compute-0 sudo[139630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:43 compute-0 python3.9[139632]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:43 compute-0 ovs-vsctl[139633]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 08:15:43 compute-0 sudo[139630]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:43 compute-0 sudo[139783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixmwrbcxtguqikdoksqazjcxlgnplgko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847343.4638288-325-56990318977459/AnsiballZ_command.py'
Jan 31 08:15:43 compute-0 sudo[139783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:43 compute-0 python3.9[139785]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:43 compute-0 sudo[139783]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:44 compute-0 sudo[139938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldieweqhuptnvjvlfrwspokklxqzdefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847344.0815291-333-272302175166328/AnsiballZ_command.py'
Jan 31 08:15:44 compute-0 sudo[139938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:44 compute-0 python3.9[139940]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:15:44 compute-0 ovs-vsctl[139941]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 08:15:44 compute-0 sudo[139938]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:44 compute-0 ceph-mon[75227]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:45 compute-0 python3.9[140091]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:15:45 compute-0 sudo[140243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllkmvntnurryhcbkxpcbvvdjgxrbghu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847345.3334138-350-228133917296199/AnsiballZ_file.py'
Jan 31 08:15:45 compute-0 sudo[140243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:45 compute-0 python3.9[140245]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:45 compute-0 sudo[140243]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:46 compute-0 sudo[140395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykfwmcvsxiqikiilnssndpdfieapjgtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847345.907753-358-251770468603332/AnsiballZ_stat.py'
Jan 31 08:15:46 compute-0 sudo[140395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:46 compute-0 python3.9[140397]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:46 compute-0 sudo[140395]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:46 compute-0 sudo[140473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxdphwidjrxlxmjkhnnwjmjhhxmiybch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847345.907753-358-251770468603332/AnsiballZ_file.py'
Jan 31 08:15:46 compute-0 sudo[140473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:46 compute-0 python3.9[140475]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:46 compute-0 sudo[140473]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:46 compute-0 ceph-mon[75227]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:47 compute-0 sudo[140625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljiwmggsookiyhvojlkkogccelwrymgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847346.9428856-358-52739999652468/AnsiballZ_stat.py'
Jan 31 08:15:47 compute-0 sudo[140625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:47 compute-0 python3.9[140627]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:47 compute-0 sudo[140625]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:47 compute-0 sudo[140703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpnmksrmzsmautsfejbisieyuwjzidez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847346.9428856-358-52739999652468/AnsiballZ_file.py'
Jan 31 08:15:47 compute-0 sudo[140703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:47 compute-0 python3.9[140705]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:47 compute-0 sudo[140703]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:48 compute-0 sudo[140855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwniuccnygcovogejolpqajawssrxjuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847347.9761124-381-40447505668283/AnsiballZ_file.py'
Jan 31 08:15:48 compute-0 sudo[140855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:48 compute-0 python3.9[140857]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:48 compute-0 sudo[140855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:48 compute-0 sudo[141007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfmuczlwocyqfddajlrduqbqkwlxawck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847348.5795236-389-150840959320991/AnsiballZ_stat.py'
Jan 31 08:15:48 compute-0 sudo[141007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:48 compute-0 python3.9[141009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:49 compute-0 sudo[141007]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:49 compute-0 ceph-mon[75227]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:49 compute-0 sudo[141085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaxlgujxsibwhjixtiursfpfxepqfspt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847348.5795236-389-150840959320991/AnsiballZ_file.py'
Jan 31 08:15:49 compute-0 sudo[141085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:49 compute-0 python3.9[141087]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:49 compute-0 sudo[141085]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:49 compute-0 sudo[141237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgiauuvttlufdxytoirnffrlrejwsjut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847349.571194-401-190246276033809/AnsiballZ_stat.py'
Jan 31 08:15:49 compute-0 sudo[141237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:49 compute-0 python3.9[141239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:50 compute-0 sudo[141237]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:50 compute-0 sudo[141315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udzzriwofbqpohzzlrldfvbdghggddey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847349.571194-401-190246276033809/AnsiballZ_file.py'
Jan 31 08:15:50 compute-0 sudo[141315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:50 compute-0 python3.9[141317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:50 compute-0 sudo[141315]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:50 compute-0 sudo[141467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zecqvailrqcpnvgsodlxawdwihdpewzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847350.5288653-413-240970659377610/AnsiballZ_systemd.py'
Jan 31 08:15:50 compute-0 sudo[141467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:51 compute-0 ceph-mon[75227]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:51 compute-0 python3.9[141469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:15:51 compute-0 systemd[1]: Reloading.
Jan 31 08:15:51 compute-0 systemd-rc-local-generator[141492]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:15:51 compute-0 systemd-sysv-generator[141500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:15:51 compute-0 sudo[141467]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:51 compute-0 sudo[141657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhguwveazeohqwztqfzldhnwxfubkaya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847351.7031972-421-112948956921441/AnsiballZ_stat.py'
Jan 31 08:15:51 compute-0 sudo[141657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:52 compute-0 python3.9[141659]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:52 compute-0 sudo[141657]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:52 compute-0 ceph-mon[75227]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:52 compute-0 sudo[141735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mouefztgjvnvktmtkgilonwsabqwatik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847351.7031972-421-112948956921441/AnsiballZ_file.py'
Jan 31 08:15:52 compute-0 sudo[141735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:52 compute-0 python3.9[141737]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:52 compute-0 sudo[141735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:52 compute-0 sudo[141887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhditiazxpgosxvqomeksemdazdotlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847352.728344-433-60748054471462/AnsiballZ_stat.py'
Jan 31 08:15:52 compute-0 sudo[141887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:53 compute-0 python3.9[141889]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:53 compute-0 sudo[141887]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:53 compute-0 sudo[141965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuhcjpgkqbtvhujcthqjouppmavwqijn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847352.728344-433-60748054471462/AnsiballZ_file.py'
Jan 31 08:15:53 compute-0 sudo[141965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:53 compute-0 python3.9[141967]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:53 compute-0 sudo[141965]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:54 compute-0 sudo[142117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slyogqmxgvobnmvmwglxguhqhwqsttfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847353.7596567-445-142127226130428/AnsiballZ_systemd.py'
Jan 31 08:15:54 compute-0 sudo[142117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:54 compute-0 python3.9[142119]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:15:54 compute-0 systemd[1]: Reloading.
Jan 31 08:15:54 compute-0 systemd-rc-local-generator[142143]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:15:54 compute-0 systemd-sysv-generator[142149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:15:54 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:15:54 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:15:54 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:15:54 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:15:54 compute-0 sudo[142117]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:54 compute-0 ceph-mon[75227]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:55 compute-0 sudo[142311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyylexdgorqudyshpoaeevnjfttvapju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847354.8683672-455-2055250491981/AnsiballZ_file.py'
Jan 31 08:15:55 compute-0 sudo[142311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:55 compute-0 python3.9[142313]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:55 compute-0 sudo[142311]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:15:55 compute-0 sudo[142463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzmtqqddrtlojniimauspktkijmjeays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847355.4107118-463-206300139729687/AnsiballZ_stat.py'
Jan 31 08:15:55 compute-0 sudo[142463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:55 compute-0 python3.9[142465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:55 compute-0 sudo[142463]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:56 compute-0 sudo[142586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcanlryoayactfjwpwwmkskgexckawah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847355.4107118-463-206300139729687/AnsiballZ_copy.py'
Jan 31 08:15:56 compute-0 sudo[142586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:56 compute-0 python3.9[142588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847355.4107118-463-206300139729687/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:56 compute-0 sudo[142586]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:56 compute-0 ceph-mon[75227]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:57 compute-0 sudo[142738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpbqtefsvlxdgrzdkdzquicpddmadjpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847356.7793531-480-125019188857367/AnsiballZ_file.py'
Jan 31 08:15:57 compute-0 sudo[142738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:57 compute-0 python3.9[142740]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:57 compute-0 sudo[142738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:57 compute-0 sudo[142890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhfywdoquflzlkfqlnbnntgvfjcaygnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847357.404267-488-211716672970242/AnsiballZ_file.py'
Jan 31 08:15:57 compute-0 sudo[142890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:57 compute-0 python3.9[142892]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:15:57 compute-0 sudo[142890]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:58 compute-0 sudo[143042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qunofizwvecutsxbupfhulqfvhlqpkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847358.0324063-496-234962115645165/AnsiballZ_stat.py'
Jan 31 08:15:58 compute-0 sudo[143042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:58 compute-0 python3.9[143044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:15:58 compute-0 sudo[143042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:58 compute-0 sudo[143165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymeefwrgfmkyyjahewblwpdpqpgzkvrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847358.0324063-496-234962115645165/AnsiballZ_copy.py'
Jan 31 08:15:58 compute-0 sudo[143165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:15:58 compute-0 python3.9[143167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847358.0324063-496-234962115645165/.source.json _original_basename=.vsglnr4f follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:58 compute-0 sudo[143165]: pam_unix(sudo:session): session closed for user root
Jan 31 08:15:58 compute-0 ceph-mon[75227]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:15:59 compute-0 python3.9[143317]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:15:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:00 compute-0 ceph-mon[75227]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:01 compute-0 sudo[143738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dosyqiadtvxonrnarskhunwwvbjkwknu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847360.8591232-536-164504528442301/AnsiballZ_container_config_data.py'
Jan 31 08:16:01 compute-0 sudo[143738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:01 compute-0 python3.9[143740]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 08:16:01 compute-0 sudo[143738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:02 compute-0 sudo[143890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqsvkmcgxzrxsshajaexjihjmidwtsjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847361.7010815-547-136539442480713/AnsiballZ_container_config_hash.py'
Jan 31 08:16:02 compute-0 sudo[143890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:02 compute-0 python3.9[143892]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:16:02 compute-0 sudo[143890]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:02 compute-0 sudo[144042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zezltdmopcsyhhxwkqylsvdjdjldxayu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847362.5217364-557-27955362007951/AnsiballZ_edpm_container_manage.py'
Jan 31 08:16:02 compute-0 sudo[144042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:02 compute-0 ceph-mon[75227]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:03 compute-0 python3[144044]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:16:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:05 compute-0 ceph-mon[75227]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:07 compute-0 ceph-mon[75227]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:07 compute-0 podman[144056]: 2026-01-31 08:16:07.41359995 +0000 UTC m=+4.152039014 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 08:16:07 compute-0 podman[144175]: 2026-01-31 08:16:07.535788634 +0000 UTC m=+0.050450080 container create 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:16:07 compute-0 podman[144175]: 2026-01-31 08:16:07.505944352 +0000 UTC m=+0.020605868 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 08:16:07 compute-0 python3[144044]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 08:16:07 compute-0 sudo[144042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:08 compute-0 sudo[144362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtwebdnpcndtegzwqdwpwnjpkdrnrbzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847367.8041131-565-128239933986089/AnsiballZ_stat.py'
Jan 31 08:16:08 compute-0 sudo[144362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:08 compute-0 ceph-mon[75227]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:08 compute-0 python3.9[144364]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:16:08 compute-0 sudo[144362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:08 compute-0 sudo[144516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyxwvnumdueodnezslztvgqecenyqefj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847368.4482012-574-260453651161189/AnsiballZ_file.py'
Jan 31 08:16:08 compute-0 sudo[144516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:08 compute-0 python3.9[144518]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:08 compute-0 sudo[144516]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-0 sudo[144592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-towvrefvwwnsznwxeyuaviqtqmutcajo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847368.4482012-574-260453651161189/AnsiballZ_stat.py'
Jan 31 08:16:09 compute-0 sudo[144592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:09 compute-0 python3.9[144594]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:16:09 compute-0 sudo[144592]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:09 compute-0 sudo[144743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtymjbqhgpsrfuswwknpwzscmkzvjysj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847369.3068302-574-107954512761227/AnsiballZ_copy.py'
Jan 31 08:16:09 compute-0 sudo[144743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:09 compute-0 python3.9[144745]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847369.3068302-574-107954512761227/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:09 compute-0 sudo[144743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:10 compute-0 sudo[144819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalrsqmvrnpikxmlrmtpzlunyypevxxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847369.3068302-574-107954512761227/AnsiballZ_systemd.py'
Jan 31 08:16:10 compute-0 sudo[144819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:10 compute-0 python3.9[144821]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:16:10 compute-0 systemd[1]: Reloading.
Jan 31 08:16:10 compute-0 systemd-rc-local-generator[144844]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:16:10 compute-0 systemd-sysv-generator[144849]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:16:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:10 compute-0 sudo[144819]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:10 compute-0 sudo[144930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwrntarshyldvnspymmzmtzrjeoqicb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847369.3068302-574-107954512761227/AnsiballZ_systemd.py'
Jan 31 08:16:10 compute-0 sudo[144930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:11 compute-0 ceph-mon[75227]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:11 compute-0 python3.9[144932]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:16:11 compute-0 systemd[1]: Reloading.
Jan 31 08:16:11 compute-0 systemd-rc-local-generator[144962]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:16:11 compute-0 systemd-sysv-generator[144965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:16:11 compute-0 systemd[1]: Starting ovn_controller container...
Jan 31 08:16:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cd0f4556558e78008ae041da38720c9f50251c774d3f6f444a4642b75d8fdf/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae.
Jan 31 08:16:11 compute-0 podman[144974]: 2026-01-31 08:16:11.863183085 +0000 UTC m=+0.278515907 container init 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:16:11 compute-0 ovn_controller[144989]: + sudo -E kolla_set_configs
Jan 31 08:16:11 compute-0 podman[144974]: 2026-01-31 08:16:11.910530638 +0000 UTC m=+0.325863470 container start 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:16:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:11 compute-0 edpm-start-podman-container[144974]: ovn_controller
Jan 31 08:16:11 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 31 08:16:11 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 08:16:11 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 08:16:11 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 31 08:16:11 compute-0 systemd[145017]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 31 08:16:11 compute-0 edpm-start-podman-container[144973]: Creating additional drop-in dependency for "ovn_controller" (14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae)
Jan 31 08:16:11 compute-0 podman[144996]: 2026-01-31 08:16:11.981605825 +0000 UTC m=+0.065618287 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:16:11 compute-0 systemd[1]: 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae-250c62c7a9abe11b.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 08:16:11 compute-0 systemd[1]: 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae-250c62c7a9abe11b.service: Failed with result 'exit-code'.
Jan 31 08:16:11 compute-0 systemd[1]: Reloading.
Jan 31 08:16:12 compute-0 systemd[145017]: Queued start job for default target Main User Target.
Jan 31 08:16:12 compute-0 systemd[145017]: Created slice User Application Slice.
Jan 31 08:16:12 compute-0 systemd-sysv-generator[145079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:16:12 compute-0 systemd[145017]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 08:16:12 compute-0 systemd[145017]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 08:16:12 compute-0 systemd[145017]: Reached target Paths.
Jan 31 08:16:12 compute-0 systemd[145017]: Reached target Timers.
Jan 31 08:16:12 compute-0 systemd-rc-local-generator[145076]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:16:12 compute-0 systemd[145017]: Starting D-Bus User Message Bus Socket...
Jan 31 08:16:12 compute-0 systemd[145017]: Starting Create User's Volatile Files and Directories...
Jan 31 08:16:12 compute-0 systemd[145017]: Finished Create User's Volatile Files and Directories.
Jan 31 08:16:12 compute-0 systemd[145017]: Listening on D-Bus User Message Bus Socket.
Jan 31 08:16:12 compute-0 systemd[145017]: Reached target Sockets.
Jan 31 08:16:12 compute-0 systemd[145017]: Reached target Basic System.
Jan 31 08:16:12 compute-0 systemd[145017]: Reached target Main User Target.
Jan 31 08:16:12 compute-0 systemd[145017]: Startup finished in 119ms.
Jan 31 08:16:12 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 31 08:16:12 compute-0 systemd[1]: Started ovn_controller container.
Jan 31 08:16:12 compute-0 systemd[1]: Started Session c1 of User root.
Jan 31 08:16:12 compute-0 sudo[144930]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:12 compute-0 ovn_controller[144989]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:16:12 compute-0 ovn_controller[144989]: INFO:__main__:Validating config file
Jan 31 08:16:12 compute-0 ovn_controller[144989]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:16:12 compute-0 ovn_controller[144989]: INFO:__main__:Writing out command to execute
Jan 31 08:16:12 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: ++ cat /run_command
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + ARGS=
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + sudo kolla_copy_cacerts
Jan 31 08:16:12 compute-0 systemd[1]: Started Session c2 of User root.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + [[ ! -n '' ]]
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + . kolla_extend_start
Jan 31 08:16:12 compute-0 ovn_controller[144989]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + umask 0022
Jan 31 08:16:12 compute-0 ovn_controller[144989]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 08:16:12 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4113] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4119] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <warn>  [1769847372.4121] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4126] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4130] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4132] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 08:16:12 compute-0 kernel: br-int: entered promiscuous mode
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00024|main|INFO|OVS feature set changed, force recompute.
Jan 31 08:16:12 compute-0 systemd-udevd[145121]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:16:12 compute-0 ovn_controller[144989]: 2026-01-31T08:16:12Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4382] manager: (ovn-ade796-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 08:16:12 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4607] device (genev_sys_6081): carrier: link connected
Jan 31 08:16:12 compute-0 NetworkManager[49054]: <info>  [1769847372.4609] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 08:16:12 compute-0 systemd-udevd[145124]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 08:16:13 compute-0 python3.9[145251]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 08:16:13 compute-0 ceph-mon[75227]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:16:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2034 writes, 9080 keys, 2034 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2034 writes, 2034 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2034 writes, 9080 keys, 2034 commit groups, 1.0 writes per commit group, ingest: 12.25 MB, 0.02 MB/s
                                           Interval WAL: 2034 writes, 2034 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    178.9      0.05              0.01         3    0.016       0      0       0.0       0.0
                                             L6      1/0    6.78 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    159.4    139.3      0.10              0.04         2    0.052    7245    739       0.0       0.0
                                            Sum      1/0    6.78 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    107.9    152.1      0.15              0.05         5    0.031    7245    739       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    110.9    156.1      0.15              0.05         4    0.037    7245    739       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    159.4    139.3      0.10              0.04         2    0.052    7245    739       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    194.4      0.05              0.01         2    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 308.00 MB usage: 687.05 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000107 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(38,595.77 KB,0.188897%) FilterBlock(6,28.36 KB,0.00899179%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:16:13 compute-0 sudo[145401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldqvjbnuvjncirtvoteiqfhghsyvoyhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847373.4281287-619-80319872166865/AnsiballZ_stat.py'
Jan 31 08:16:13 compute-0 sudo[145401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:13 compute-0 python3.9[145403]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:13 compute-0 sudo[145401]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:14 compute-0 sudo[145524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukygxriuxxwvuortcofqiizynqawmsod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847373.4281287-619-80319872166865/AnsiballZ_copy.py'
Jan 31 08:16:14 compute-0 sudo[145524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:14 compute-0 python3.9[145526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847373.4281287-619-80319872166865/.source.yaml _original_basename=.5m6ejl3c follow=False checksum=1a2e4ae73b9ac25b107575967ad92468de0fdd78 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:14 compute-0 sudo[145524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:14 compute-0 sudo[145676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meziabkcrxakufyabobypcqvtlvdfjkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847374.513532-634-215513705472769/AnsiballZ_command.py'
Jan 31 08:16:14 compute-0 sudo[145676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:14 compute-0 python3.9[145678]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:16:14 compute-0 ovs-vsctl[145679]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 08:16:14 compute-0 sudo[145676]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:15 compute-0 ceph-mon[75227]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:15 compute-0 sudo[145829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaomhtidytheursstiszokscyohpgdjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847375.1209016-642-104856745736102/AnsiballZ_command.py'
Jan 31 08:16:15 compute-0 sudo[145829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:15 compute-0 python3.9[145831]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:16:15 compute-0 ovs-vsctl[145833]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 08:16:15 compute-0 sudo[145829]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:16 compute-0 sudo[145984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alaxynqqgkiqcuyhcamafcyumjiauedt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847375.904742-656-108192211711391/AnsiballZ_command.py'
Jan 31 08:16:16 compute-0 sudo[145984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:16 compute-0 python3.9[145986]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:16:16 compute-0 ovs-vsctl[145987]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 08:16:16 compute-0 sudo[145984]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:16 compute-0 sshd-session[133791]: Connection closed by 192.168.122.30 port 55562
Jan 31 08:16:16 compute-0 sshd-session[133788]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:16:16 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 08:16:16 compute-0 systemd[1]: session-46.scope: Consumed 50.732s CPU time.
Jan 31 08:16:16 compute-0 systemd-logind[793]: Session 46 logged out. Waiting for processes to exit.
Jan 31 08:16:16 compute-0 systemd-logind[793]: Removed session 46.
Jan 31 08:16:17 compute-0 ceph-mon[75227]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:19 compute-0 ceph-mon[75227]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:21 compute-0 ceph-mon[75227]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:22 compute-0 sshd-session[146012]: Accepted publickey for zuul from 192.168.122.30 port 46134 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:16:22 compute-0 systemd-logind[793]: New session 48 of user zuul.
Jan 31 08:16:22 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 31 08:16:22 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 31 08:16:22 compute-0 systemd[145017]: Activating special unit Exit the Session...
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped target Main User Target.
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped target Basic System.
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped target Paths.
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped target Sockets.
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped target Timers.
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 08:16:22 compute-0 systemd[145017]: Closed D-Bus User Message Bus Socket.
Jan 31 08:16:22 compute-0 systemd[145017]: Stopped Create User's Volatile Files and Directories.
Jan 31 08:16:22 compute-0 systemd[145017]: Removed slice User Application Slice.
Jan 31 08:16:22 compute-0 systemd[145017]: Reached target Shutdown.
Jan 31 08:16:22 compute-0 systemd[145017]: Finished Exit the Session.
Jan 31 08:16:22 compute-0 systemd[145017]: Reached target Exit the Session.
Jan 31 08:16:22 compute-0 sshd-session[146012]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:16:22 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 08:16:22 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 31 08:16:22 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 08:16:22 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 08:16:22 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 08:16:22 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 08:16:22 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 08:16:23 compute-0 ceph-mon[75227]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:23 compute-0 python3.9[146167]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:16:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:24 compute-0 ceph-mon[75227]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:24 compute-0 sudo[146321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seyophwcyapwihrfnnslhnsiqoltjvir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847383.9941995-29-169229963419235/AnsiballZ_file.py'
Jan 31 08:16:24 compute-0 sudo[146321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:24 compute-0 sudo[146322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:24 compute-0 sudo[146322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:24 compute-0 sudo[146322]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:24 compute-0 sudo[146349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:16:24 compute-0 sudo[146349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:24 compute-0 python3.9[146336]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:24 compute-0 sudo[146321]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:24 compute-0 sudo[146349]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:16:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:16:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:16:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:16:24 compute-0 sudo[146555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntupogtqraobfbrooxttkerxdemacgng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847384.6990447-29-32997042149459/AnsiballZ_file.py'
Jan 31 08:16:24 compute-0 sudo[146555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:16:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:16:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:16:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:16:24 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:16:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:16:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:25 compute-0 sudo[146558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:25 compute-0 sudo[146558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:25 compute-0 sudo[146558]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:25 compute-0 sudo[146583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:16:25 compute-0 sudo[146583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:25 compute-0 python3.9[146557]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:25 compute-0 sudo[146555]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:16:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:16:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:16:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:16:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.320656336 +0000 UTC m=+0.043416309 container create b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:16:25 compute-0 systemd[1]: Started libpod-conmon-b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119.scope.
Jan 31 08:16:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.393448043 +0000 UTC m=+0.116208016 container init b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.299457146 +0000 UTC m=+0.022217169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.401017684 +0000 UTC m=+0.123777657 container start b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.404353667 +0000 UTC m=+0.127113810 container attach b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:16:25 compute-0 cranky_booth[146750]: 167 167
Jan 31 08:16:25 compute-0 systemd[1]: libpod-b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119.scope: Deactivated successfully.
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.406004303 +0000 UTC m=+0.128764276 container died b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb05b9914f06e9cea799a904a5ce959cea1fa82c25c163add3eae907debb0f3c-merged.mount: Deactivated successfully.
Jan 31 08:16:25 compute-0 sudo[146799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyjdvlfofcnbzqffiplokrsmcfuppfrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847385.2124333-29-238326243533421/AnsiballZ_file.py'
Jan 31 08:16:25 compute-0 sudo[146799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:25 compute-0 podman[146697]: 2026-01-31 08:16:25.438642221 +0000 UTC m=+0.161402194 container remove b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_booth, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:16:25 compute-0 systemd[1]: libpod-conmon-b7e5cc945e348e3291afed269b31edbb3222f505a741cadcfe2def8d2d531119.scope: Deactivated successfully.
Jan 31 08:16:25 compute-0 podman[146814]: 2026-01-31 08:16:25.553143059 +0000 UTC m=+0.039072738 container create a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:16:25 compute-0 systemd[1]: Started libpod-conmon-a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1.scope.
Jan 31 08:16:25 compute-0 python3.9[146806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:25 compute-0 podman[146814]: 2026-01-31 08:16:25.534942053 +0000 UTC m=+0.020871722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:25 compute-0 sudo[146799]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:25 compute-0 podman[146814]: 2026-01-31 08:16:25.643409473 +0000 UTC m=+0.129339142 container init a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:16:25 compute-0 podman[146814]: 2026-01-31 08:16:25.652475125 +0000 UTC m=+0.138404764 container start a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:25 compute-0 podman[146814]: 2026-01-31 08:16:25.655768897 +0000 UTC m=+0.141698556 container attach a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:16:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:25 compute-0 sudo[146993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddmqmugvmodhdrapxuyqqffdxnlabya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847385.741209-29-267860424537644/AnsiballZ_file.py'
Jan 31 08:16:25 compute-0 sudo[146993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:26 compute-0 quirky_roentgen[146831]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:16:26 compute-0 quirky_roentgen[146831]: --> All data devices are unavailable
Jan 31 08:16:26 compute-0 systemd[1]: libpod-a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1.scope: Deactivated successfully.
Jan 31 08:16:26 compute-0 podman[146814]: 2026-01-31 08:16:26.059148608 +0000 UTC m=+0.545078287 container died a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:16:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-effe2d366dde8c5312699fd25701be795054e0e605aac08fde415cc254c362b9-merged.mount: Deactivated successfully.
Jan 31 08:16:26 compute-0 podman[146814]: 2026-01-31 08:16:26.109827209 +0000 UTC m=+0.595756858 container remove a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:26 compute-0 systemd[1]: libpod-conmon-a4d1f4361fbc047079a6dffc25fd80169a3284166fb5408da320f759fe06fef1.scope: Deactivated successfully.
Jan 31 08:16:26 compute-0 sudo[146583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-0 python3.9[146997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:26 compute-0 sudo[146993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-0 sudo[147017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:26 compute-0 sudo[147017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:26 compute-0 sudo[147017]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-0 sudo[147046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:16:26 compute-0 sudo[147046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:26 compute-0 ceph-mon[75227]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.482466624 +0000 UTC m=+0.039538772 container create a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:16:26 compute-0 systemd[1]: Started libpod-conmon-a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9.scope.
Jan 31 08:16:26 compute-0 sudo[147244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nonqwmrhueqfwjbxujzvlmgjpwmeafmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847386.2987556-29-275922949521838/AnsiballZ_file.py'
Jan 31 08:16:26 compute-0 sudo[147244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.460628636 +0000 UTC m=+0.017700794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.568027236 +0000 UTC m=+0.125099404 container init a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.573663623 +0000 UTC m=+0.130735771 container start a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:26 compute-0 ecstatic_cerf[147246]: 167 167
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.577399957 +0000 UTC m=+0.134472115 container attach a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:16:26 compute-0 systemd[1]: libpod-a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9.scope: Deactivated successfully.
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.577877501 +0000 UTC m=+0.134949679 container died a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:16:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b8b2474f27cb6631f26a732cb7771af7f9b94704e21c815bb9bd606b45adc97-merged.mount: Deactivated successfully.
Jan 31 08:16:26 compute-0 podman[147184]: 2026-01-31 08:16:26.618569154 +0000 UTC m=+0.175641312 container remove a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:16:26 compute-0 systemd[1]: libpod-conmon-a06fe364a8db235aa57f7bf20f39b10f2a83abe9eb5e8bce5dbadbb7a8ac16b9.scope: Deactivated successfully.
Jan 31 08:16:26 compute-0 python3.9[147248]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:26 compute-0 sudo[147244]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:26 compute-0 podman[147272]: 2026-01-31 08:16:26.782704594 +0000 UTC m=+0.060584998 container create c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:16:26 compute-0 systemd[1]: Started libpod-conmon-c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48.scope.
Jan 31 08:16:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:26 compute-0 podman[147272]: 2026-01-31 08:16:26.759798536 +0000 UTC m=+0.037678980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:26 compute-0 podman[147272]: 2026-01-31 08:16:26.872982737 +0000 UTC m=+0.150863151 container init c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:16:26 compute-0 podman[147272]: 2026-01-31 08:16:26.878633685 +0000 UTC m=+0.156514059 container start c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:16:26 compute-0 podman[147272]: 2026-01-31 08:16:26.882868802 +0000 UTC m=+0.160749196 container attach c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:16:27 compute-0 elegant_volhard[147313]: {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:     "0": [
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:         {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "devices": [
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "/dev/loop3"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             ],
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_name": "ceph_lv0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_size": "21470642176",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "name": "ceph_lv0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "tags": {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cluster_name": "ceph",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.crush_device_class": "",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.encrypted": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.objectstore": "bluestore",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osd_id": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.type": "block",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.vdo": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.with_tpm": "0"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             },
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "type": "block",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "vg_name": "ceph_vg0"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:         }
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:     ],
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:     "1": [
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:         {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "devices": [
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "/dev/loop4"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             ],
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_name": "ceph_lv1",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_size": "21470642176",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "name": "ceph_lv1",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "tags": {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cluster_name": "ceph",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.crush_device_class": "",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.encrypted": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.objectstore": "bluestore",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osd_id": "1",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.type": "block",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.vdo": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.with_tpm": "0"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             },
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "type": "block",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "vg_name": "ceph_vg1"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:         }
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:     ],
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:     "2": [
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:         {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "devices": [
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "/dev/loop5"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             ],
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_name": "ceph_lv2",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_size": "21470642176",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "name": "ceph_lv2",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "tags": {
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.cluster_name": "ceph",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.crush_device_class": "",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.encrypted": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.objectstore": "bluestore",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osd_id": "2",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.type": "block",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.vdo": "0",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:                 "ceph.with_tpm": "0"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             },
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "type": "block",
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:             "vg_name": "ceph_vg2"
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:         }
Jan 31 08:16:27 compute-0 elegant_volhard[147313]:     ]
Jan 31 08:16:27 compute-0 elegant_volhard[147313]: }
Jan 31 08:16:27 compute-0 systemd[1]: libpod-c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48.scope: Deactivated successfully.
Jan 31 08:16:27 compute-0 podman[147272]: 2026-01-31 08:16:27.172396664 +0000 UTC m=+0.450277108 container died c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fdfb547dcb05eefa6f7c43402769e94068f8197dbac377ab93daababcd4b8d3-merged.mount: Deactivated successfully.
Jan 31 08:16:27 compute-0 podman[147272]: 2026-01-31 08:16:27.226757547 +0000 UTC m=+0.504637951 container remove c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:16:27 compute-0 systemd[1]: libpod-conmon-c5f60780a34d9cbea8e575836dba986f314fabc90b43437fdd95657fb45edc48.scope: Deactivated successfully.
Jan 31 08:16:27 compute-0 sudo[147046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:27 compute-0 sudo[147459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:16:27 compute-0 sudo[147459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:27 compute-0 sudo[147459]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:27 compute-0 sudo[147484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:16:27 compute-0 sudo[147484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:27 compute-0 python3.9[147447]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.670739649 +0000 UTC m=+0.046453685 container create d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:16:27 compute-0 systemd[1]: Started libpod-conmon-d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398.scope.
Jan 31 08:16:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.652416149 +0000 UTC m=+0.028130185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.748405681 +0000 UTC m=+0.124119707 container init d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.754319046 +0000 UTC m=+0.130033062 container start d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.757520885 +0000 UTC m=+0.133234901 container attach d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:16:27 compute-0 determined_volhard[147615]: 167 167
Jan 31 08:16:27 compute-0 systemd[1]: libpod-d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398.scope: Deactivated successfully.
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.758169113 +0000 UTC m=+0.133883149 container died d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:16:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-86d67a8f1c5414d1b469ff923953ecd03d8bf9f9ba0f9289df35a9bddb2d5099-merged.mount: Deactivated successfully.
Jan 31 08:16:27 compute-0 podman[147589]: 2026-01-31 08:16:27.797886279 +0000 UTC m=+0.173600275 container remove d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:16:27 compute-0 systemd[1]: libpod-conmon-d8edfb3fa4134e2bf9f4ec5d85d6335e0a5b71041fd52be37778cab75f150398.scope: Deactivated successfully.
Jan 31 08:16:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:27 compute-0 podman[147676]: 2026-01-31 08:16:27.937490656 +0000 UTC m=+0.042660589 container create 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:16:27 compute-0 systemd[1]: Started libpod-conmon-523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1.scope.
Jan 31 08:16:27 compute-0 sudo[147724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocyviqzldmdjdseuqepoeegdmfvzzpak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847387.5858393-73-24177252106365/AnsiballZ_seboolean.py'
Jan 31 08:16:27 compute-0 sudo[147724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:16:28 compute-0 podman[147676]: 2026-01-31 08:16:28.011563688 +0000 UTC m=+0.116733621 container init 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:16:28 compute-0 podman[147676]: 2026-01-31 08:16:27.920320708 +0000 UTC m=+0.025490641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:16:28 compute-0 podman[147676]: 2026-01-31 08:16:28.017562585 +0000 UTC m=+0.122732498 container start 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:16:28 compute-0 podman[147676]: 2026-01-31 08:16:28.020562579 +0000 UTC m=+0.125732492 container attach 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:16:28 compute-0 python3.9[147731]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 08:16:28 compute-0 lvm[147806]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:16:28 compute-0 lvm[147806]: VG ceph_vg0 finished
Jan 31 08:16:28 compute-0 lvm[147809]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:16:28 compute-0 lvm[147809]: VG ceph_vg1 finished
Jan 31 08:16:28 compute-0 lvm[147810]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:16:28 compute-0 lvm[147810]: VG ceph_vg2 finished
Jan 31 08:16:28 compute-0 sudo[147724]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:28 compute-0 pensive_yonath[147729]: {}
Jan 31 08:16:28 compute-0 systemd[1]: libpod-523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1.scope: Deactivated successfully.
Jan 31 08:16:28 compute-0 podman[147676]: 2026-01-31 08:16:28.770091087 +0000 UTC m=+0.875261000 container died 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:16:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-8137b7db32d25c7b5306ab7614dd99f514f8aeadd8c8532a27371cf66a90c876-merged.mount: Deactivated successfully.
Jan 31 08:16:29 compute-0 ceph-mon[75227]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:29 compute-0 podman[147676]: 2026-01-31 08:16:29.2187914 +0000 UTC m=+1.323961323 container remove 523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:16:29 compute-0 systemd[1]: libpod-conmon-523797fa6ef63bc05aab6109e3184f3ba5f2f213cbd3eff80b6d8ee9cb05f2a1.scope: Deactivated successfully.
Jan 31 08:16:29 compute-0 sudo[147484]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:16:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:16:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:16:29 compute-0 python3.9[147975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:29 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:16:29 compute-0 sudo[148001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:16:29 compute-0 sudo[148001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:16:29 compute-0 sudo[148001]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:30 compute-0 python3.9[148123]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847388.8327312-81-119805003470172/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:16:30 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:16:30 compute-0 ceph-mon[75227]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:30 compute-0 python3.9[148274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:31 compute-0 python3.9[148395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847390.471877-96-196558196739024/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:16:31
Jan 31 08:16:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:16:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:16:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes']
Jan 31 08:16:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:16:31 compute-0 sudo[148545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtkjwexuvxmikxtiltnkujdlgdrcujen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847391.6063592-113-219490305132262/AnsiballZ_setup.py'
Jan 31 08:16:31 compute-0 sudo[148545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:32 compute-0 python3.9[148547]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:16:32 compute-0 sudo[148545]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:32 compute-0 sudo[148629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjcafcjlpiiucpsqtzqyeijjfepiudb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847391.6063592-113-219490305132262/AnsiballZ_dnf.py'
Jan 31 08:16:32 compute-0 sudo[148629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:16:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:16:33 compute-0 ceph-mon[75227]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:33 compute-0 python3.9[148631]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:16:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:34 compute-0 sudo[148629]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:35 compute-0 sudo[148782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnifwnmvxenwubeslopprsxnhqljojux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847394.5257654-125-48245929165061/AnsiballZ_systemd.py'
Jan 31 08:16:35 compute-0 ceph-mon[75227]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:35 compute-0 sudo[148782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:35 compute-0 python3.9[148784]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:16:35 compute-0 sudo[148782]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:36 compute-0 python3.9[148937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:36 compute-0 python3.9[149058]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847395.6060321-133-217847457028105/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:37 compute-0 python3.9[149208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:37 compute-0 ceph-mon[75227]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:37 compute-0 python3.9[149329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847396.668276-133-41418450536309/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:38 compute-0 ceph-mon[75227]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:38 compute-0 python3.9[149479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:39 compute-0 python3.9[149600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847398.3192205-177-184363009209326/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:39 compute-0 python3.9[149750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:40 compute-0 python3.9[149871]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847399.37619-177-1710377557794/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:40 compute-0 python3.9[150021]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:16:40 compute-0 ceph-mon[75227]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:41 compute-0 sudo[150173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwvunrgthgotuicqtirfcghhfbfsbrmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847401.1724508-215-113775351512551/AnsiballZ_file.py'
Jan 31 08:16:41 compute-0 sudo[150173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:41 compute-0 python3.9[150175]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:41 compute-0 sudo[150173]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:42 compute-0 sudo[150336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynjtjfdkkwozncrdbdoxzwnlifnsqrni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847401.7956135-223-107584985271304/AnsiballZ_stat.py'
Jan 31 08:16:42 compute-0 sudo[150336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:42 compute-0 ovn_controller[144989]: 2026-01-31T08:16:42Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Jan 31 08:16:42 compute-0 ovn_controller[144989]: 2026-01-31T08:16:42Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 31 08:16:42 compute-0 podman[150299]: 2026-01-31 08:16:42.143727991 +0000 UTC m=+0.119490638 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:16:42 compute-0 python3.9[150342]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:42 compute-0 sudo[150336]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:42 compute-0 sudo[150429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lekvzskhputrpmnzsktzsvnyjgkogmcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847401.7956135-223-107584985271304/AnsiballZ_file.py'
Jan 31 08:16:42 compute-0 sudo[150429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:42 compute-0 python3.9[150431]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:42 compute-0 sudo[150429]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:42 compute-0 ceph-mon[75227]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:16:43 compute-0 sudo[150581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daamonwobmjuhhjdrilibaclrzfvkfsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847402.8343787-223-255243509018262/AnsiballZ_stat.py'
Jan 31 08:16:43 compute-0 sudo[150581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:16:43 compute-0 python3.9[150583]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:43 compute-0 sudo[150581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:43 compute-0 sudo[150659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfthnkkkrfaihtqddrpkjfnkhhpclsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847402.8343787-223-255243509018262/AnsiballZ_file.py'
Jan 31 08:16:43 compute-0 sudo[150659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:43 compute-0 python3.9[150661]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:43 compute-0 sudo[150659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:44 compute-0 sudo[150811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxjgigcaltkpffgzsatgfyxxlchohnva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847403.915062-246-76513678781295/AnsiballZ_file.py'
Jan 31 08:16:44 compute-0 sudo[150811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:44 compute-0 python3.9[150813]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:44 compute-0 sudo[150811]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:44 compute-0 sudo[150963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxfmpqcdkjaxuamxhfaadxobnlykynog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847404.5263524-254-177441007424889/AnsiballZ_stat.py'
Jan 31 08:16:44 compute-0 sudo[150963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:44 compute-0 python3.9[150965]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:45 compute-0 ceph-mon[75227]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:45 compute-0 sudo[150963]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:45 compute-0 sudo[151041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuetxwlnsgonbrqoeklanoyfmzqtywue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847404.5263524-254-177441007424889/AnsiballZ_file.py'
Jan 31 08:16:45 compute-0 sudo[151041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:45 compute-0 python3.9[151043]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:45 compute-0 sudo[151041]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:45 compute-0 sudo[151193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofguynlehnyyzvquikcabgjqdttnutyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847405.5763218-266-240395860634018/AnsiballZ_stat.py'
Jan 31 08:16:45 compute-0 sudo[151193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:46 compute-0 python3.9[151195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:46 compute-0 sudo[151193]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:46 compute-0 sudo[151271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leiwslvxijlqctzlzaigxzavorkndeuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847405.5763218-266-240395860634018/AnsiballZ_file.py'
Jan 31 08:16:46 compute-0 sudo[151271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:46 compute-0 python3.9[151273]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:46 compute-0 sudo[151271]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:46 compute-0 sudo[151423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snjolkrtieqtahtrtrixvptrqxazowcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847406.5973845-278-28273163917250/AnsiballZ_systemd.py'
Jan 31 08:16:46 compute-0 sudo[151423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:47 compute-0 ceph-mon[75227]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:47 compute-0 python3.9[151425]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:16:47 compute-0 systemd[1]: Reloading.
Jan 31 08:16:47 compute-0 systemd-rc-local-generator[151452]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:16:47 compute-0 systemd-sysv-generator[151456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:16:47 compute-0 sudo[151423]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:47 compute-0 sudo[151612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkdqrifreorkapdwbmdihxbqkdhtedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847407.5331-286-10672070595790/AnsiballZ_stat.py'
Jan 31 08:16:47 compute-0 sudo[151612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:48 compute-0 python3.9[151614]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:48 compute-0 sudo[151612]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:48 compute-0 sudo[151690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxcndlafbmmmsrfisdeuboqnrybbwxeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847407.5331-286-10672070595790/AnsiballZ_file.py'
Jan 31 08:16:48 compute-0 sudo[151690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:48 compute-0 python3.9[151692]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:48 compute-0 sudo[151690]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:48 compute-0 sudo[151842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvisjpnsbqxyovtqxfzjivhrhtrxfbkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847408.6279373-298-129765601433493/AnsiballZ_stat.py'
Jan 31 08:16:48 compute-0 sudo[151842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:49 compute-0 ceph-mon[75227]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:49 compute-0 python3.9[151844]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:49 compute-0 sudo[151842]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:49 compute-0 sudo[151920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqcmjctchedolhhxldmzcvpesglbtrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847408.6279373-298-129765601433493/AnsiballZ_file.py'
Jan 31 08:16:49 compute-0 sudo[151920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:49 compute-0 python3.9[151922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:49 compute-0 sudo[151920]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:49 compute-0 sudo[152072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvydqcidmpcaqoaalgzujoqbpfqmqrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847409.654759-310-122831208067467/AnsiballZ_systemd.py'
Jan 31 08:16:49 compute-0 sudo[152072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:50 compute-0 python3.9[152074]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:16:50 compute-0 systemd[1]: Reloading.
Jan 31 08:16:50 compute-0 systemd-sysv-generator[152105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:16:50 compute-0 systemd-rc-local-generator[152102]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:16:50 compute-0 systemd[1]: Starting Create netns directory...
Jan 31 08:16:50 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 08:16:50 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 08:16:50 compute-0 systemd[1]: Finished Create netns directory.
Jan 31 08:16:50 compute-0 sudo[152072]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:50 compute-0 sudo[152265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlebesyevmyyijnwrvcautiuvygcoptg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847410.6898906-320-3429558075523/AnsiballZ_file.py'
Jan 31 08:16:50 compute-0 sudo[152265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:51 compute-0 ceph-mon[75227]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:51 compute-0 python3.9[152267]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:51 compute-0 sudo[152265]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:51 compute-0 sudo[152417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhnwwhhrnfuylfqvyzzdkynjawoalnob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847411.2888505-328-15952052968702/AnsiballZ_stat.py'
Jan 31 08:16:51 compute-0 sudo[152417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:51 compute-0 python3.9[152419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:51 compute-0 sudo[152417]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.058405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412058501, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 946, "num_deletes": 251, "total_data_size": 1403443, "memory_usage": 1427776, "flush_reason": "Manual Compaction"}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412070459, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1380275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8842, "largest_seqno": 9787, "table_properties": {"data_size": 1375588, "index_size": 2275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 9605, "raw_average_key_size": 18, "raw_value_size": 1366284, "raw_average_value_size": 2658, "num_data_blocks": 106, "num_entries": 514, "num_filter_entries": 514, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847321, "oldest_key_time": 1769847321, "file_creation_time": 1769847412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12130 microseconds, and 6280 cpu microseconds.
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.070549) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1380275 bytes OK
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.070584) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.072491) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.072519) EVENT_LOG_v1 {"time_micros": 1769847412072511, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.072546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1398922, prev total WAL file size 1398922, number of live WAL files 2.
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.073103) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1347KB)], [23(6943KB)]
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412073147, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8490215, "oldest_snapshot_seqno": -1}
Jan 31 08:16:52 compute-0 sudo[152540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geecwpbtzzmcmfkqlgzaidscizvhfzis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847411.2888505-328-15952052968702/AnsiballZ_copy.py'
Jan 31 08:16:52 compute-0 sudo[152540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3350 keys, 6612133 bytes, temperature: kUnknown
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412117916, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6612133, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6587109, "index_size": 15571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81374, "raw_average_key_size": 24, "raw_value_size": 6523808, "raw_average_value_size": 1947, "num_data_blocks": 678, "num_entries": 3350, "num_filter_entries": 3350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847412, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.118311) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6612133 bytes
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.120130) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.1 rd, 147.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 3864, records dropped: 514 output_compression: NoCompression
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.120157) EVENT_LOG_v1 {"time_micros": 1769847412120142, "job": 8, "event": "compaction_finished", "compaction_time_micros": 44908, "compaction_time_cpu_micros": 21063, "output_level": 6, "num_output_files": 1, "total_output_size": 6612133, "num_input_records": 3864, "num_output_records": 3350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412120459, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847412121201, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.073053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:16:52.121273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:16:52 compute-0 python3.9[152542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847411.2888505-328-15952052968702/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:52 compute-0 sudo[152540]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:52 compute-0 sudo[152692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snokoqpngudscwgzfdyajidfzhjeenof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847412.6104198-345-158900264930474/AnsiballZ_file.py'
Jan 31 08:16:52 compute-0 sudo[152692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:53 compute-0 python3.9[152694]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:53 compute-0 ceph-mon[75227]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:53 compute-0 sudo[152692]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:53 compute-0 sudo[152844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duxpqxkicyfemdxosdyfgsncfquatcix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847413.2081437-353-240918853179094/AnsiballZ_file.py'
Jan 31 08:16:53 compute-0 sudo[152844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:53 compute-0 python3.9[152846]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:16:53 compute-0 sudo[152844]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:54 compute-0 sudo[152996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngwtuqvuzcdozrxwwqweivinnxypfioh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847413.7682061-361-10031163139324/AnsiballZ_stat.py'
Jan 31 08:16:54 compute-0 sudo[152996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:54 compute-0 python3.9[152998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:16:54 compute-0 sudo[152996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:54 compute-0 sudo[153119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shwxmamzskzqmfpiqxwqszlgrilqhdlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847413.7682061-361-10031163139324/AnsiballZ_copy.py'
Jan 31 08:16:54 compute-0 sudo[153119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:54 compute-0 python3.9[153121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847413.7682061-361-10031163139324/.source.json _original_basename=.jy0favfc follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:54 compute-0 sudo[153119]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:55 compute-0 ceph-mon[75227]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:55 compute-0 python3.9[153271]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:16:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:16:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:57 compute-0 ceph-mon[75227]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:57 compute-0 sudo[153692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fouylqbiygrglxnpxmmtapdlsjpftouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847416.7513015-401-41784013612858/AnsiballZ_container_config_data.py'
Jan 31 08:16:57 compute-0 sudo[153692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:57 compute-0 python3.9[153694]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 08:16:57 compute-0 sudo[153692]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:58 compute-0 sudo[153844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbrxxlmfkevrisecqqbvfdqdtjjopqmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847417.6714544-412-267049918163842/AnsiballZ_container_config_hash.py'
Jan 31 08:16:58 compute-0 sudo[153844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:58 compute-0 python3.9[153846]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:16:58 compute-0 sudo[153844]: pam_unix(sudo:session): session closed for user root
Jan 31 08:16:58 compute-0 sudo[153996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqgmecbsentastiazsfeobfkmgnsnwrb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847418.4657614-422-88379621778248/AnsiballZ_edpm_container_manage.py'
Jan 31 08:16:58 compute-0 sudo[153996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:16:59 compute-0 ceph-mon[75227]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:16:59 compute-0 python3[153998]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:16:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:00 compute-0 ceph-mon[75227]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:03 compute-0 ceph-mon[75227]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:07 compute-0 ceph-mon[75227]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:09 compute-0 ceph-mon[75227]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:09 compute-0 ceph-mon[75227]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:10 compute-0 podman[154012]: 2026-01-31 08:17:10.349212958 +0000 UTC m=+11.107393778 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:10 compute-0 podman[154134]: 2026-01-31 08:17:10.447220907 +0000 UTC m=+0.025054878 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:10 compute-0 ceph-mon[75227]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:11 compute-0 podman[154134]: 2026-01-31 08:17:11.052154081 +0000 UTC m=+0.629987982 container create 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:17:11 compute-0 python3[153998]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 08:17:11 compute-0 sudo[153996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:11 compute-0 sudo[154320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcnzifpbfnnghcyyqnsbfwwiqecgkxvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847431.3190167-430-127205421880535/AnsiballZ_stat.py'
Jan 31 08:17:11 compute-0 sudo[154320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:11 compute-0 python3.9[154322]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:17:11 compute-0 sudo[154320]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:12 compute-0 sudo[154485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tikvetlwbqhizecezrabgfyvjvjqozzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847431.9703205-439-259262607995340/AnsiballZ_file.py'
Jan 31 08:17:12 compute-0 sudo[154485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:12 compute-0 podman[154448]: 2026-01-31 08:17:12.423538253 +0000 UTC m=+0.225430428 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:17:12 compute-0 python3.9[154492]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:12 compute-0 sudo[154485]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:12 compute-0 sudo[154575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjnlpdjinicpdgolnsgoccwwklovrgma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847431.9703205-439-259262607995340/AnsiballZ_stat.py'
Jan 31 08:17:12 compute-0 sudo[154575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:12 compute-0 python3.9[154577]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:17:12 compute-0 sudo[154575]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:13 compute-0 ceph-mon[75227]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:13 compute-0 sudo[154726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnorzzlehlpzymnoyufbfczuzfswoglh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847432.9207106-439-19935681485815/AnsiballZ_copy.py'
Jan 31 08:17:13 compute-0 sudo[154726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:13 compute-0 python3.9[154728]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847432.9207106-439-19935681485815/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:13 compute-0 sudo[154726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:13 compute-0 sudo[154802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifbxflprxzfxsmaunqvnnluirsxnbtmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847432.9207106-439-19935681485815/AnsiballZ_systemd.py'
Jan 31 08:17:13 compute-0 sudo[154802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:14 compute-0 python3.9[154804]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:17:14 compute-0 systemd[1]: Reloading.
Jan 31 08:17:14 compute-0 systemd-rc-local-generator[154828]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:17:14 compute-0 systemd-sysv-generator[154831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:17:14 compute-0 sudo[154802]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:14 compute-0 sudo[154914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coctveyilhrlkcpoegchnhevafaypirw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847432.9207106-439-19935681485815/AnsiballZ_systemd.py'
Jan 31 08:17:14 compute-0 sudo[154914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:14 compute-0 python3.9[154916]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:14 compute-0 systemd[1]: Reloading.
Jan 31 08:17:14 compute-0 systemd-rc-local-generator[154944]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:17:14 compute-0 systemd-sysv-generator[154948]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:17:15 compute-0 ceph-mon[75227]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:15 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 08:17:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881fad8bab9bcb9120aebd18d25ae3dd80dcb5d3d3be236d25ba70ef23eaf771/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/881fad8bab9bcb9120aebd18d25ae3dd80dcb5d3d3be236d25ba70ef23eaf771/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869.
Jan 31 08:17:15 compute-0 podman[154956]: 2026-01-31 08:17:15.490670409 +0000 UTC m=+0.216978892 container init 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + sudo -E kolla_set_configs
Jan 31 08:17:15 compute-0 podman[154956]: 2026-01-31 08:17:15.540011153 +0000 UTC m=+0.266319596 container start 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:15 compute-0 edpm-start-podman-container[154956]: ovn_metadata_agent
Jan 31 08:17:15 compute-0 podman[154979]: 2026-01-31 08:17:15.700529292 +0000 UTC m=+0.149996507 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:15 compute-0 edpm-start-podman-container[154955]: Creating additional drop-in dependency for "ovn_metadata_agent" (5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869)
Jan 31 08:17:15 compute-0 systemd[1]: Reloading.
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Validating config file
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Copying service configuration files
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Writing out command to execute
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: ++ cat /run_command
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + CMD=neutron-ovn-metadata-agent
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + ARGS=
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + sudo kolla_copy_cacerts
Jan 31 08:17:15 compute-0 systemd-rc-local-generator[155043]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:17:15 compute-0 systemd-sysv-generator[155048]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + [[ ! -n '' ]]
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + . kolla_extend_start
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + umask 0022
Jan 31 08:17:15 compute-0 ovn_metadata_agent[154972]: + exec neutron-ovn-metadata-agent
Jan 31 08:17:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:16 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 31 08:17:16 compute-0 sudo[154914]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:17:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5575 writes, 24K keys, 5575 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5575 writes, 837 syncs, 6.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5575 writes, 24K keys, 5575 commit groups, 1.0 writes per commit group, ingest: 18.85 MB, 0.03 MB/s
                                           Interval WAL: 5575 writes, 837 syncs, 6.66 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:17:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:16 compute-0 python3.9[155208]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 08:17:17 compute-0 ceph-mon[75227]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:17 compute-0 sudo[155359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flpcjozobxkwydybthxocchfrthltszb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847437.1628768-484-247964179401463/AnsiballZ_stat.py'
Jan 31 08:17:17 compute-0 sudo[155359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:17 compute-0 python3.9[155361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:17:17 compute-0 sudo[155359]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.824 154977 INFO neutron.common.config [-] Logging enabled!
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.825 154977 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.825 154977 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.825 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.826 154977 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.827 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.828 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.829 154977 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.830 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.831 154977 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.832 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.833 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.834 154977 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.835 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.836 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.837 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.838 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.839 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.840 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.841 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.842 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.843 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.844 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.845 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.846 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.847 154977 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.848 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.849 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.850 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.851 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.852 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.853 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.854 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.855 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.856 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.857 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.858 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.859 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.860 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.861 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.862 154977 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.862 154977 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.872 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.872 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.872 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.873 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.873 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.887 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c8bc61c4-1b90-42d4-9c52-3d83532ede66 (UUID: c8bc61c4-1b90-42d4-9c52-3d83532ede66) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.911 154977 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.915 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.920 154977 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.926 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c8bc61c4-1b90-42d4-9c52-3d83532ede66'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7efc988fdd30>], external_ids={}, name=c8bc61c4-1b90-42d4-9c52-3d83532ede66, nb_cfg_timestamp=1769847380441, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.927 154977 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7efc9887ec10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.928 154977 INFO oslo_service.service [-] Starting 1 workers
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.931 154977 DEBUG oslo_service.service [-] Started child 155459 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.934 154977 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3_v07x8g/privsep.sock']
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.935 155459 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-166633'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 31 08:17:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.959 155459 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.960 155459 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.960 155459 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.965 155459 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 08:17:17 compute-0 sudo[155487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhgqohsbzqaxrjjmdiccqafcsaminnxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847437.1628768-484-247964179401463/AnsiballZ_copy.py'
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.974 155459 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 08:17:17 compute-0 sudo[155487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:17.982 155459 INFO eventlet.wsgi.server [-] (155459) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 31 08:17:18 compute-0 python3.9[155490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847437.1628768-484-247964179401463/.source.yaml _original_basename=.tf7twk81 follow=False checksum=123065ba71fa8a2d5bb23ca29c6be2688936190b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:18 compute-0 sudo[155487]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:18 compute-0 ceph-mon[75227]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:18 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.579 154977 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.580 154977 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3_v07x8g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.481 155516 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.486 155516 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.490 155516 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.490 155516 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155516
Jan 31 08:17:18 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:18.584 155516 DEBUG oslo.privsep.daemon [-] privsep: reply[597cebf1-dffc-4322-9f34-1b632162ba4a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:18 compute-0 sshd-session[146015]: Connection closed by 192.168.122.30 port 46134
Jan 31 08:17:18 compute-0 sshd-session[146012]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:17:18 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 08:17:18 compute-0 systemd[1]: session-48.scope: Consumed 48.778s CPU time.
Jan 31 08:17:18 compute-0 systemd-logind[793]: Session 48 logged out. Waiting for processes to exit.
Jan 31 08:17:18 compute-0 systemd-logind[793]: Removed session 48.
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.070 155516 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.070 155516 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.070 155516 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.627 155516 DEBUG oslo.privsep.daemon [-] privsep: reply[47a9d3b1-0417-4b42-8727-9b64ba7a929d]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.630 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, column=external_ids, values=({'neutron:ovn-metadata-id': '55e132ff-622c-524b-8a5a-3db2e758bc47'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.693 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.800 154977 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.801 154977 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.802 154977 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.803 154977 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.804 154977 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.805 154977 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.806 154977 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.807 154977 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.808 154977 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.809 154977 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.810 154977 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.811 154977 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.812 154977 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.813 154977 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.814 154977 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.815 154977 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.816 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.817 154977 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.818 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.819 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.820 154977 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.821 154977 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.822 154977 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.823 154977 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.824 154977 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.825 154977 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.826 154977 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.827 154977 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.828 154977 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.829 154977 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.830 154977 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.831 154977 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.832 154977 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.833 154977 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.834 154977 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.835 154977 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.836 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.837 154977 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.838 154977 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.839 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.840 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.841 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.842 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.843 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.844 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:17:19 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:17:19.845 154977 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 08:17:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:21 compute-0 ceph-mon[75227]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:17:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Cumulative writes: 6832 writes, 29K keys, 6832 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6832 writes, 1235 syncs, 5.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6832 writes, 29K keys, 6832 commit groups, 1.0 writes per commit group, ingest: 19.93 MB, 0.03 MB/s
                                           Interval WAL: 6832 writes, 1235 syncs, 5.53 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:17:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:23 compute-0 ceph-mon[75227]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:25 compute-0 sshd-session[155521]: Accepted publickey for zuul from 192.168.122.30 port 55366 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:17:25 compute-0 systemd-logind[793]: New session 49 of user zuul.
Jan 31 08:17:25 compute-0 ceph-mon[75227]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:25 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 31 08:17:25 compute-0 sshd-session[155521]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:17:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:26 compute-0 python3.9[155674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:17:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:27 compute-0 sudo[155828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrdklndgsnndkgbbwdaepbdjyoevepgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847446.647148-29-108363707783727/AnsiballZ_command.py'
Jan 31 08:17:27 compute-0 sudo[155828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:27 compute-0 ceph-mon[75227]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:27 compute-0 python3.9[155830]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:27 compute-0 sudo[155828]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:17:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Cumulative writes: 5364 writes, 23K keys, 5364 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5364 writes, 713 syncs, 7.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5364 writes, 23K keys, 5364 commit groups, 1.0 writes per commit group, ingest: 18.56 MB, 0.03 MB/s
                                           Interval WAL: 5364 writes, 713 syncs, 7.52 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:17:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:28 compute-0 sudo[155993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpyyxxpjqdjmxwuykzntmakmalmmmdjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847447.6787646-40-43700371514648/AnsiballZ_systemd_service.py'
Jan 31 08:17:28 compute-0 sudo[155993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:28 compute-0 python3.9[155995]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:17:28 compute-0 systemd[1]: Reloading.
Jan 31 08:17:28 compute-0 systemd-rc-local-generator[156022]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:17:28 compute-0 systemd-sysv-generator[156025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:17:28 compute-0 sudo[155993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:29 compute-0 ceph-mon[75227]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:29 compute-0 python3.9[156179]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:17:29 compute-0 network[156196]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:17:29 compute-0 network[156197]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:17:29 compute-0 network[156198]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:17:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:29 compute-0 sudo[156204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:29 compute-0 sudo[156204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:30 compute-0 sudo[156204]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:30 compute-0 sudo[156230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:17:30 compute-0 sudo[156230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:30 compute-0 sudo[156230]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:30 compute-0 sudo[156304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:30 compute-0 sudo[156304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:30 compute-0 sudo[156304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:30 compute-0 sudo[156332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:17:30 compute-0 sudo[156332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:30 compute-0 sudo[156332]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:17:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:17:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:17:30 compute-0 sudo[156422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:30 compute-0 sudo[156422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:30 compute-0 sudo[156422]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:31 compute-0 sudo[156450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:17:31 compute-0 sudo[156450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.343328503 +0000 UTC m=+0.057234674 container create 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.310929478 +0000 UTC m=+0.024835679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:17:31 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:17:31 compute-0 systemd[1]: Started libpod-conmon-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope.
Jan 31 08:17:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.538607573 +0000 UTC m=+0.252513814 container init 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.547992266 +0000 UTC m=+0.261898467 container start 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.561553474 +0000 UTC m=+0.275459685 container attach 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:17:31 compute-0 busy_jang[156534]: 167 167
Jan 31 08:17:31 compute-0 systemd[1]: libpod-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope: Deactivated successfully.
Jan 31 08:17:31 compute-0 conmon[156534]: conmon 2949aef1c4b4875ee078 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope/container/memory.events
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.565115351 +0000 UTC m=+0.279021552 container died 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-319779b9a1984a67a2e776dfff1cb499b58137d7a7009f1eddd573bd58adf7fd-merged.mount: Deactivated successfully.
Jan 31 08:17:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:31 compute-0 podman[156507]: 2026-01-31 08:17:31.660207155 +0000 UTC m=+0.374113316 container remove 2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_jang, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:17:31 compute-0 systemd[1]: libpod-conmon-2949aef1c4b4875ee07840f22e4a0f046038e8ce4421ce9ce1e1a09bfe65f959.scope: Deactivated successfully.
Jan 31 08:17:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:17:31
Jan 31 08:17:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:17:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:17:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes', 'backups', 'default.rgw.control']
Jan 31 08:17:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:17:31 compute-0 podman[156588]: 2026-01-31 08:17:31.818409138 +0000 UTC m=+0.066763211 container create 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:17:31 compute-0 systemd[1]: Started libpod-conmon-475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6.scope.
Jan 31 08:17:31 compute-0 podman[156588]: 2026-01-31 08:17:31.781461956 +0000 UTC m=+0.029816019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:31 compute-0 podman[156588]: 2026-01-31 08:17:31.936134023 +0000 UTC m=+0.184488096 container init 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:17:31 compute-0 podman[156588]: 2026-01-31 08:17:31.94336107 +0000 UTC m=+0.191715123 container start 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:17:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:31 compute-0 podman[156588]: 2026-01-31 08:17:31.970386454 +0000 UTC m=+0.218740517 container attach 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 08:17:32 compute-0 sudo[156735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcldudpmdbykgufuthzeapzintwnypbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847451.868125-59-105292736891719/AnsiballZ_systemd_service.py'
Jan 31 08:17:32 compute-0 sudo[156735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:32 compute-0 cool_goodall[156628]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:17:32 compute-0 cool_goodall[156628]: --> All data devices are unavailable
Jan 31 08:17:32 compute-0 systemd[1]: libpod-475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6.scope: Deactivated successfully.
Jan 31 08:17:32 compute-0 podman[156588]: 2026-01-31 08:17:32.428315623 +0000 UTC m=+0.676669686 container died 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:32 compute-0 python3.9[156737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:32 compute-0 ceph-mon[75227]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-08edea015a76cd055c7fc67689cf631deecd72feb02f39e480ac0c60b569bbc9-merged.mount: Deactivated successfully.
Jan 31 08:17:32 compute-0 sudo[156735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:32 compute-0 podman[156588]: 2026-01-31 08:17:32.572329039 +0000 UTC m=+0.820683102 container remove 475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goodall, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:17:32 compute-0 systemd[1]: libpod-conmon-475f39e4575006c5601d5f19781c106ffc8e05167c534af9a6700eaa125cc2e6.scope: Deactivated successfully.
Jan 31 08:17:32 compute-0 sudo[156450]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:32 compute-0 sudo[156818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:32 compute-0 sudo[156818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:32 compute-0 sudo[156818]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:32 compute-0 sudo[156868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:17:32 compute-0 sudo[156868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:17:32 compute-0 sudo[156966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkkoqilsnuysyztemwiivdhogtrufjgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847452.6070716-59-108307125110818/AnsiballZ_systemd_service.py'
Jan 31 08:17:32 compute-0 sudo[156966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:17:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:33.056680262 +0000 UTC m=+0.097109804 container create fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:32.985117368 +0000 UTC m=+0.025546920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:33 compute-0 systemd[1]: Started libpod-conmon-fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c.scope.
Jan 31 08:17:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:33 compute-0 python3.9[156968]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:33.205008358 +0000 UTC m=+0.245437900 container init fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:33 compute-0 sudo[156966]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:33.212938657 +0000 UTC m=+0.253368169 container start fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:17:33 compute-0 mystifying_khorana[156997]: 167 167
Jan 31 08:17:33 compute-0 systemd[1]: libpod-fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c.scope: Deactivated successfully.
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:33.271589453 +0000 UTC m=+0.312019005 container attach fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:33.273067217 +0000 UTC m=+0.313496749 container died fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-477d064de361f52d530c8496a3506ed45cd5fe87d7d05f8113e84c59fa6d98fa-merged.mount: Deactivated successfully.
Jan 31 08:17:33 compute-0 podman[156981]: 2026-01-31 08:17:33.366719337 +0000 UTC m=+0.407148869 container remove fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:17:33 compute-0 systemd[1]: libpod-conmon-fb78b63b3f807c9dfd2842d57af933dc63dfdd77161517c115fae2f77a6dfb8c.scope: Deactivated successfully.
Jan 31 08:17:33 compute-0 podman[157122]: 2026-01-31 08:17:33.554686627 +0000 UTC m=+0.077351310 container create 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:17:33 compute-0 sudo[157185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxecyamobtvwtawnczlilyrvskbkbyph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847453.3210404-59-269079845421695/AnsiballZ_systemd_service.py'
Jan 31 08:17:33 compute-0 sudo[157185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:33 compute-0 podman[157122]: 2026-01-31 08:17:33.507227298 +0000 UTC m=+0.029892031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:33 compute-0 systemd[1]: Started libpod-conmon-9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3.scope.
Jan 31 08:17:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:33 compute-0 podman[157122]: 2026-01-31 08:17:33.738998526 +0000 UTC m=+0.261663249 container init 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:33 compute-0 podman[157122]: 2026-01-31 08:17:33.749515523 +0000 UTC m=+0.272180206 container start 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:33 compute-0 podman[157122]: 2026-01-31 08:17:33.759356919 +0000 UTC m=+0.282021572 container attach 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:17:33 compute-0 python3.9[157187]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:33 compute-0 sudo[157185]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:34 compute-0 lucid_kilby[157190]: {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:     "0": [
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:         {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "devices": [
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "/dev/loop3"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             ],
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_name": "ceph_lv0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_size": "21470642176",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "name": "ceph_lv0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "tags": {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.crush_device_class": "",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.encrypted": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.objectstore": "bluestore",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osd_id": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.type": "block",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.vdo": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.with_tpm": "0"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             },
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "type": "block",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "vg_name": "ceph_vg0"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:         }
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:     ],
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:     "1": [
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:         {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "devices": [
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "/dev/loop4"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             ],
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_name": "ceph_lv1",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_size": "21470642176",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "name": "ceph_lv1",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "tags": {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.crush_device_class": "",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.encrypted": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.objectstore": "bluestore",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osd_id": "1",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.type": "block",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.vdo": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.with_tpm": "0"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             },
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "type": "block",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "vg_name": "ceph_vg1"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:         }
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:     ],
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:     "2": [
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:         {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "devices": [
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "/dev/loop5"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             ],
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_name": "ceph_lv2",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_size": "21470642176",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "name": "ceph_lv2",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "tags": {
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.cluster_name": "ceph",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.crush_device_class": "",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.encrypted": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.objectstore": "bluestore",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osd_id": "2",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.type": "block",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.vdo": "0",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:                 "ceph.with_tpm": "0"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             },
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "type": "block",
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:             "vg_name": "ceph_vg2"
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:         }
Jan 31 08:17:34 compute-0 lucid_kilby[157190]:     ]
Jan 31 08:17:34 compute-0 lucid_kilby[157190]: }
Jan 31 08:17:34 compute-0 systemd[1]: libpod-9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3.scope: Deactivated successfully.
Jan 31 08:17:34 compute-0 podman[157122]: 2026-01-31 08:17:34.100681617 +0000 UTC m=+0.623346250 container died 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-af14f8a74fced95ffe1c3034735965f65b32ff59aa179d752e17c92b57587978-merged.mount: Deactivated successfully.
Jan 31 08:17:34 compute-0 podman[157122]: 2026-01-31 08:17:34.227233727 +0000 UTC m=+0.749898430 container remove 9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_kilby, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:17:34 compute-0 systemd[1]: libpod-conmon-9a422705f8b1ea5782e9a4718ddcdbcf7a6f9394ed7dd3ac1de0455ac5414cf3.scope: Deactivated successfully.
Jan 31 08:17:34 compute-0 sudo[156868]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:34 compute-0 sudo[157335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:17:34 compute-0 sudo[157335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:34 compute-0 sudo[157335]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:34 compute-0 sudo[157387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evyaypgpqejrflwesugkmwgdseydfnyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847454.11785-59-206713406269696/AnsiballZ_systemd_service.py'
Jan 31 08:17:34 compute-0 sudo[157387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:34 compute-0 sudo[157388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:17:34 compute-0 sudo[157388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:34 compute-0 python3.9[157394]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:34 compute-0 sudo[157387]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:34 compute-0 podman[157427]: 2026-01-31 08:17:34.715919212 +0000 UTC m=+0.076076402 container create 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:34 compute-0 podman[157427]: 2026-01-31 08:17:34.665086051 +0000 UTC m=+0.025243301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:34 compute-0 systemd[1]: Started libpod-conmon-845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e.scope.
Jan 31 08:17:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:34 compute-0 podman[157427]: 2026-01-31 08:17:34.839378349 +0000 UTC m=+0.199535539 container init 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:17:34 compute-0 podman[157427]: 2026-01-31 08:17:34.84473256 +0000 UTC m=+0.204889730 container start 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:17:34 compute-0 tender_montalcini[157469]: 167 167
Jan 31 08:17:34 compute-0 systemd[1]: libpod-845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e.scope: Deactivated successfully.
Jan 31 08:17:34 compute-0 podman[157427]: 2026-01-31 08:17:34.848676999 +0000 UTC m=+0.208834179 container attach 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:17:34 compute-0 podman[157427]: 2026-01-31 08:17:34.848980098 +0000 UTC m=+0.209137268 container died 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:17:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b95bc1f7ae05d5caae1da9ff84ff0ce6eed6ef15095d2e0b669928d461ceb651-merged.mount: Deactivated successfully.
Jan 31 08:17:35 compute-0 podman[157427]: 2026-01-31 08:17:35.079703315 +0000 UTC m=+0.439860505 container remove 845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_montalcini, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:17:35 compute-0 ceph-mon[75227]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:35 compute-0 systemd[1]: libpod-conmon-845a9a3400f2f77e384ea0e4f08645a3655523d66307a00bd1584e8236eade4e.scope: Deactivated successfully.
Jan 31 08:17:35 compute-0 sudo[157613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgkiuogirstjasgynwbgqdsfyemrvrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847454.8291233-59-57543719917792/AnsiballZ_systemd_service.py'
Jan 31 08:17:35 compute-0 sudo[157613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:35 compute-0 podman[157623]: 2026-01-31 08:17:35.30370418 +0000 UTC m=+0.104117876 container create c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:17:35 compute-0 podman[157623]: 2026-01-31 08:17:35.236215988 +0000 UTC m=+0.036629794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:17:35 compute-0 systemd[1]: Started libpod-conmon-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope.
Jan 31 08:17:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:17:35 compute-0 python3.9[157617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:35 compute-0 podman[157623]: 2026-01-31 08:17:35.441169469 +0000 UTC m=+0.241583245 container init c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:17:35 compute-0 podman[157623]: 2026-01-31 08:17:35.448762598 +0000 UTC m=+0.249176334 container start c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:17:35 compute-0 sudo[157613]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:35 compute-0 podman[157623]: 2026-01-31 08:17:35.479661798 +0000 UTC m=+0.280075594 container attach c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 08:17:35 compute-0 sudo[157824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vohaphmtncptjuxncpnspdfhluaonjsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847455.6139603-59-43950262089363/AnsiballZ_systemd_service.py'
Jan 31 08:17:35 compute-0 sudo[157824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:36 compute-0 lvm[157873]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:17:36 compute-0 lvm[157873]: VG ceph_vg1 finished
Jan 31 08:17:36 compute-0 lvm[157872]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:17:36 compute-0 lvm[157872]: VG ceph_vg0 finished
Jan 31 08:17:36 compute-0 lvm[157875]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:17:36 compute-0 lvm[157875]: VG ceph_vg2 finished
Jan 31 08:17:36 compute-0 python3.9[157831]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:36 compute-0 sudo[157824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:36 compute-0 infallible_knuth[157641]: {}
Jan 31 08:17:36 compute-0 systemd[1]: libpod-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope: Deactivated successfully.
Jan 31 08:17:36 compute-0 systemd[1]: libpod-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope: Consumed 1.098s CPU time.
Jan 31 08:17:36 compute-0 podman[157623]: 2026-01-31 08:17:36.387599646 +0000 UTC m=+1.188013372 container died c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:17:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:36 compute-0 sudo[158042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzejqvrhatkpwfqhqgbexctbxzksxdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847456.3448603-59-153544785447733/AnsiballZ_systemd_service.py'
Jan 31 08:17:36 compute-0 sudo[158042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4acec4e0bbe1b47db64a7c48bf721e63bb8b64771296482353af5c3b0df1969-merged.mount: Deactivated successfully.
Jan 31 08:17:36 compute-0 python3.9[158044]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:17:36 compute-0 podman[157623]: 2026-01-31 08:17:36.992339254 +0000 UTC m=+1.792752960 container remove c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:17:36 compute-0 systemd[1]: libpod-conmon-c065fbce3e8685dab6fa6f701711898b523bfa483a8b71553d6f1779e12e1fd8.scope: Deactivated successfully.
Jan 31 08:17:37 compute-0 sudo[158042]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:37 compute-0 sudo[157388]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:17:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:17:37 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:37 compute-0 sudo[158069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:17:37 compute-0 sudo[158069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:17:37 compute-0 sudo[158069]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:37 compute-0 ceph-mon[75227]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:17:37 compute-0 sudo[158220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clfjtjcrjugadvlxfefjikleuvhyvpqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847457.2576087-111-278543357973798/AnsiballZ_file.py'
Jan 31 08:17:37 compute-0 sudo[158220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:37 compute-0 python3.9[158222]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:38 compute-0 sudo[158220]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:38 compute-0 sudo[158372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agqwlqydblqukindhmqxuomoxqjvcvli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847458.1458614-111-19760601847709/AnsiballZ_file.py'
Jan 31 08:17:38 compute-0 sudo[158372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:38 compute-0 python3.9[158374]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:38 compute-0 sudo[158372]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:38 compute-0 sudo[158524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wshiqujzxdfsadinvwdamqgffzuhgolm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847458.7274957-111-14701043212455/AnsiballZ_file.py'
Jan 31 08:17:38 compute-0 sudo[158524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:39 compute-0 python3.9[158526]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:39 compute-0 ceph-mon[75227]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:39 compute-0 sudo[158524]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:39 compute-0 sudo[158676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwyhoihjrwxieyjecltljhyrmuetlpxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847459.3317804-111-222175710022770/AnsiballZ_file.py'
Jan 31 08:17:39 compute-0 sudo[158676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:39 compute-0 python3.9[158678]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:39 compute-0 sudo[158676]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:40 compute-0 sudo[158828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqbuupognlzqmytzdhgyctebufyziard ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847459.9664145-111-40747243578437/AnsiballZ_file.py'
Jan 31 08:17:40 compute-0 sudo[158828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:40 compute-0 python3.9[158830]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:40 compute-0 sudo[158828]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:40 compute-0 sudo[158980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqhjjdyuwkulyhccphynedoekbgbfiiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847460.6124735-111-190229088576855/AnsiballZ_file.py'
Jan 31 08:17:40 compute-0 sudo[158980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:41 compute-0 python3.9[158982]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:41 compute-0 sudo[158980]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:41 compute-0 ceph-mon[75227]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:41 compute-0 sudo[159132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcrjoqloqnyvhxipdjepwpfzxyduweqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847461.1989994-111-118191830369476/AnsiballZ_file.py'
Jan 31 08:17:41 compute-0 sudo[159132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:41 compute-0 python3.9[159134]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:41 compute-0 sudo[159132]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:42 compute-0 sudo[159284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwnfezztsjwwrqjjepgzlulvuzwujhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847461.799937-161-6617552545285/AnsiballZ_file.py'
Jan 31 08:17:42 compute-0 sudo[159284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:42 compute-0 python3.9[159286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:42 compute-0 sudo[159284]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:42 compute-0 sudo[159447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jllfbjwofijquqbkdtquoaxbujbqnzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847462.566029-161-25889450969081/AnsiballZ_file.py'
Jan 31 08:17:42 compute-0 sudo[159447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:42 compute-0 podman[159410]: 2026-01-31 08:17:42.901096698 +0000 UTC m=+0.095537458 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 08:17:43 compute-0 python3.9[159455]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:43 compute-0 sudo[159447]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:43 compute-0 ceph-mon[75227]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:17:43 compute-0 sudo[159615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umxjgpfkrwaoslsunvyjrcvinfhigpdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847463.1694624-161-52608603210598/AnsiballZ_file.py'
Jan 31 08:17:43 compute-0 sudo[159615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:43 compute-0 python3.9[159617]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:43 compute-0 sudo[159615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:44 compute-0 sudo[159767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkriaijweijggdogihtokanfxfriney ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847463.8666286-161-64798268365861/AnsiballZ_file.py'
Jan 31 08:17:44 compute-0 sudo[159767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:44 compute-0 python3.9[159769]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:44 compute-0 sudo[159767]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:44 compute-0 sudo[159919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uplkkaycqoftwcsyvocksxiyyiozosvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847464.4625127-161-177779546036821/AnsiballZ_file.py'
Jan 31 08:17:44 compute-0 sudo[159919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:44 compute-0 python3.9[159921]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:44 compute-0 sudo[159919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:45 compute-0 ceph-mon[75227]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:45 compute-0 sudo[160071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdswmswbneqphxbwrimtapvfobevuhpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847465.035602-161-33797387603331/AnsiballZ_file.py'
Jan 31 08:17:45 compute-0 sudo[160071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:45 compute-0 python3.9[160073]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:45 compute-0 sudo[160071]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:45 compute-0 sudo[160234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-konbnmnmietiacoccfnawwtmtgpluakp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847465.5843532-161-3240931703457/AnsiballZ_file.py'
Jan 31 08:17:45 compute-0 sudo[160234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:45 compute-0 podman[160197]: 2026-01-31 08:17:45.920053248 +0000 UTC m=+0.098286200 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:17:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:46 compute-0 python3.9[160240]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:17:46 compute-0 sudo[160234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:46 compute-0 sudo[160392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqjiegegbgynnofbrenkcmahogyjwyda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847466.34941-212-143562954984388/AnsiballZ_command.py'
Jan 31 08:17:46 compute-0 sudo[160392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:46 compute-0 python3.9[160394]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:46 compute-0 sudo[160392]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:47 compute-0 ceph-mon[75227]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:47 compute-0 python3.9[160546]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:17:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:48 compute-0 sudo[160696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ingilmtijdmsiajyvzikcygmtwrdvtff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847467.8867598-230-70916509107344/AnsiballZ_systemd_service.py'
Jan 31 08:17:48 compute-0 sudo[160696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:48 compute-0 python3.9[160698]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:17:48 compute-0 systemd[1]: Reloading.
Jan 31 08:17:48 compute-0 systemd-sysv-generator[160723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:17:48 compute-0 systemd-rc-local-generator[160716]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:17:48 compute-0 sudo[160696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:49 compute-0 sudo[160883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llvjfonlpbopxpbzmebxmmiqxrerisjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847468.9424658-238-199244656527368/AnsiballZ_command.py'
Jan 31 08:17:49 compute-0 sudo[160883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:49 compute-0 ceph-mon[75227]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:49 compute-0 python3.9[160885]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:49 compute-0 sudo[160883]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:49 compute-0 sudo[161036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnpozxuqetuldxgysnwhlhpfvnzkqxps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847469.5499144-238-122979685444062/AnsiballZ_command.py'
Jan 31 08:17:49 compute-0 sudo[161036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:49 compute-0 python3.9[161038]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:49 compute-0 sudo[161036]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:50 compute-0 ceph-mon[75227]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:50 compute-0 sudo[161189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llcjarpodajulliidgoellkziiwinxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847470.0841672-238-271207855841026/AnsiballZ_command.py'
Jan 31 08:17:50 compute-0 sudo[161189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:50 compute-0 python3.9[161191]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:50 compute-0 sudo[161189]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:50 compute-0 sudo[161342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vndxumbdkonzdyzxxpndqfalbgqgqyxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847470.6678402-238-246718807241021/AnsiballZ_command.py'
Jan 31 08:17:50 compute-0 sudo[161342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:51 compute-0 python3.9[161344]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:51 compute-0 sudo[161342]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:51 compute-0 sudo[161495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkvqxvjbfxkyykchurrpibtwxhjgawdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847471.320322-238-176918700042271/AnsiballZ_command.py'
Jan 31 08:17:51 compute-0 sudo[161495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:51 compute-0 python3.9[161497]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:51 compute-0 sudo[161495]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:52 compute-0 sudo[161648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txjjkucaxocwghcxxhweizxkpczvncdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847471.9780357-238-135799796409300/AnsiballZ_command.py'
Jan 31 08:17:52 compute-0 sudo[161648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:52 compute-0 python3.9[161650]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:52 compute-0 sudo[161648]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:52 compute-0 sudo[161801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asnjzazlabvlkycybcjvljxkzvrfqdap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847472.5388038-238-145441927946594/AnsiballZ_command.py'
Jan 31 08:17:52 compute-0 sudo[161801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:52 compute-0 python3.9[161803]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:17:53 compute-0 sudo[161801]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:53 compute-0 ceph-mon[75227]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:53 compute-0 sudo[161954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sweslagrevoazkyaamsvvbmrhrkrtpsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847473.3779364-292-124975141797165/AnsiballZ_getent.py'
Jan 31 08:17:53 compute-0 sudo[161954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:53 compute-0 python3.9[161956]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 08:17:54 compute-0 sudo[161954]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:54 compute-0 sudo[162107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwzudiraeuileqolhhexdnojkejopsjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847474.1774058-300-140624140379767/AnsiballZ_group.py'
Jan 31 08:17:54 compute-0 sudo[162107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:54 compute-0 python3.9[162109]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:17:54 compute-0 groupadd[162110]: group added to /etc/group: name=libvirt, GID=42473
Jan 31 08:17:54 compute-0 groupadd[162110]: group added to /etc/gshadow: name=libvirt
Jan 31 08:17:54 compute-0 groupadd[162110]: new group: name=libvirt, GID=42473
Jan 31 08:17:54 compute-0 sudo[162107]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:55 compute-0 ceph-mon[75227]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:55 compute-0 sudo[162265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgcdedvapnxocxrmxzhspevzbtsratbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847475.1129386-308-25443167562683/AnsiballZ_user.py'
Jan 31 08:17:55 compute-0 sudo[162265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:55 compute-0 python3.9[162267]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 08:17:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:55 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:17:56 compute-0 useradd[162269]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 31 08:17:56 compute-0 sudo[162265]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:17:56 compute-0 sudo[162426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwwweiklwgcvhntqzpfdoszeiuudqhkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847476.6950066-319-115269990926896/AnsiballZ_setup.py'
Jan 31 08:17:56 compute-0 sudo[162426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:57 compute-0 ceph-mon[75227]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:57 compute-0 python3.9[162428]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:17:57 compute-0 sudo[162426]: pam_unix(sudo:session): session closed for user root
Jan 31 08:17:57 compute-0 sudo[162510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhtgrlfudttmiqxxglhgfvqnqekqbnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847476.6950066-319-115269990926896/AnsiballZ_dnf.py'
Jan 31 08:17:57 compute-0 sudo[162510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:17:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:58 compute-0 python3.9[162512]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:17:59 compute-0 ceph-mon[75227]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:17:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:01 compute-0 ceph-mon[75227]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:02 compute-0 ceph-mon[75227]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:06 compute-0 ceph-mon[75227]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:08 compute-0 ceph-mon[75227]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:10 compute-0 ceph-mon[75227]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:12 compute-0 ceph-mon[75227]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:13 compute-0 podman[162586]: 2026-01-31 08:18:13.202749375 +0000 UTC m=+0.089102104 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:18:13 compute-0 ceph-mon[75227]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:15 compute-0 ceph-mon[75227]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:16 compute-0 podman[162703]: 2026-01-31 08:18:16.16637649 +0000 UTC m=+0.055510303 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:18:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:18:17.876 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:18:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:18:17.876 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:18:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:18:17.876 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:18:18 compute-0 ceph-mon[75227]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:19 compute-0 ceph-mon[75227]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:21 compute-0 ceph-mon[75227]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:23 compute-0 ceph-mon[75227]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:26 compute-0 ceph-mon[75227]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:28 compute-0 ceph-mon[75227]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:30 compute-0 ceph-mon[75227]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:18:31
Jan 31 08:18:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:18:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:18:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root']
Jan 31 08:18:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:18:32 compute-0 ceph-mon[75227]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:18:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:18:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:35 compute-0 ceph-mon[75227]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:35 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:18:35 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:18:36 compute-0 ceph-mon[75227]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:37 compute-0 sudo[162759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:37 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 08:18:37 compute-0 sudo[162759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:37 compute-0 sudo[162759]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:37 compute-0 sudo[162784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:18:37 compute-0 sudo[162784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:37 compute-0 podman[162854]: 2026-01-31 08:18:37.72611763 +0000 UTC m=+0.083372880 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:37 compute-0 podman[162854]: 2026-01-31 08:18:37.883640445 +0000 UTC m=+0.240895645 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:18:38 compute-0 ceph-mon[75227]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:38 compute-0 sudo[162784]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:18:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:18:38 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:38 compute-0 sudo[163040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:38 compute-0 sudo[163040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:38 compute-0 sudo[163040]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:38 compute-0 sudo[163065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:18:38 compute-0 sudo[163065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:39 compute-0 sudo[163065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:39 compute-0 sudo[163121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:39 compute-0 sudo[163121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:39 compute-0 sudo[163121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:39 compute-0 sudo[163146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:18:39 compute-0 sudo[163146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:39 compute-0 ceph-mon[75227]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:18:39 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.737441689 +0000 UTC m=+0.051473376 container create 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:39 compute-0 systemd[1]: Started libpod-conmon-092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7.scope.
Jan 31 08:18:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.717891616 +0000 UTC m=+0.031923273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.819090609 +0000 UTC m=+0.133122286 container init 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.826174479 +0000 UTC m=+0.140206166 container start 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.82973148 +0000 UTC m=+0.143763137 container attach 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:39 compute-0 quizzical_keldysh[163199]: 167 167
Jan 31 08:18:39 compute-0 systemd[1]: libpod-092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7.scope: Deactivated successfully.
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.834657909 +0000 UTC m=+0.148689586 container died 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:18:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-bffd6286bd175dd2399a6957c1ff7357cc89828993b89c0dd3d1193d32fc49a1-merged.mount: Deactivated successfully.
Jan 31 08:18:39 compute-0 podman[163183]: 2026-01-31 08:18:39.893660238 +0000 UTC m=+0.207691915 container remove 092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:39 compute-0 systemd[1]: libpod-conmon-092718be4ccc9f643577e34d6b6eeda389da4f8148ef74cd0c19d90b71c4d7c7.scope: Deactivated successfully.
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.088817418 +0000 UTC m=+0.060317247 container create e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:40 compute-0 systemd[1]: Started libpod-conmon-e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e.scope.
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.060678052 +0000 UTC m=+0.032177931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.207559457 +0000 UTC m=+0.179059266 container init e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.216889991 +0000 UTC m=+0.188389830 container start e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.221333226 +0000 UTC m=+0.192833075 container attach e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:18:40 compute-0 frosty_cohen[163243]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:18:40 compute-0 frosty_cohen[163243]: --> All data devices are unavailable
Jan 31 08:18:40 compute-0 systemd[1]: libpod-e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e.scope: Deactivated successfully.
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.689101237 +0000 UTC m=+0.660601076 container died e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-783ccef7283116ea235762d4991b6869c821bff71329749f151a2163e6976ffd-merged.mount: Deactivated successfully.
Jan 31 08:18:40 compute-0 podman[163225]: 2026-01-31 08:18:40.7426047 +0000 UTC m=+0.714104509 container remove e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:18:40 compute-0 systemd[1]: libpod-conmon-e31622a66efac46973aaa4899d195efbcd810331cd0b5265efc83daf8425514e.scope: Deactivated successfully.
Jan 31 08:18:40 compute-0 sudo[163146]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:40 compute-0 sudo[163274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:40 compute-0 sudo[163274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:40 compute-0 sudo[163274]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:40 compute-0 sudo[163299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:18:40 compute-0 sudo[163299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.225835567 +0000 UTC m=+0.044803268 container create 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:18:41 compute-0 systemd[1]: Started libpod-conmon-1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d.scope.
Jan 31 08:18:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.298804851 +0000 UTC m=+0.117772562 container init 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.303030021 +0000 UTC m=+0.121997712 container start 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.306122898 +0000 UTC m=+0.125090579 container attach 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.210483613 +0000 UTC m=+0.029451344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:41 compute-0 cool_stonebraker[163352]: 167 167
Jan 31 08:18:41 compute-0 systemd[1]: libpod-1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d.scope: Deactivated successfully.
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.309579356 +0000 UTC m=+0.128547077 container died 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-24093636e9012feba5f33cce0c22639e47ffd8b5c9c85c3df0816e932b291980-merged.mount: Deactivated successfully.
Jan 31 08:18:41 compute-0 podman[163335]: 2026-01-31 08:18:41.352275234 +0000 UTC m=+0.171242955 container remove 1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:18:41 compute-0 systemd[1]: libpod-conmon-1415b2156ce114c9e46a93de80c2f22dd729792f475b6c785444c5ff3664ce0d.scope: Deactivated successfully.
Jan 31 08:18:41 compute-0 podman[163377]: 2026-01-31 08:18:41.525359959 +0000 UTC m=+0.060337037 container create 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:41 compute-0 systemd[1]: Started libpod-conmon-99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76.scope.
Jan 31 08:18:41 compute-0 podman[163377]: 2026-01-31 08:18:41.497746508 +0000 UTC m=+0.032723626 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:41 compute-0 podman[163377]: 2026-01-31 08:18:41.61870692 +0000 UTC m=+0.153684038 container init 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:18:41 compute-0 podman[163377]: 2026-01-31 08:18:41.632248743 +0000 UTC m=+0.167225811 container start 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:41 compute-0 podman[163377]: 2026-01-31 08:18:41.636558935 +0000 UTC m=+0.171536013 container attach 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:41 compute-0 ceph-mon[75227]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:18:41 compute-0 musing_nash[163393]: {
Jan 31 08:18:41 compute-0 musing_nash[163393]:     "0": [
Jan 31 08:18:41 compute-0 musing_nash[163393]:         {
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "devices": [
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "/dev/loop3"
Jan 31 08:18:41 compute-0 musing_nash[163393]:             ],
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_name": "ceph_lv0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_size": "21470642176",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "name": "ceph_lv0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "tags": {
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.crush_device_class": "",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.encrypted": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.objectstore": "bluestore",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osd_id": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.type": "block",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.vdo": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.with_tpm": "0"
Jan 31 08:18:41 compute-0 musing_nash[163393]:             },
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "type": "block",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "vg_name": "ceph_vg0"
Jan 31 08:18:41 compute-0 musing_nash[163393]:         }
Jan 31 08:18:41 compute-0 musing_nash[163393]:     ],
Jan 31 08:18:41 compute-0 musing_nash[163393]:     "1": [
Jan 31 08:18:41 compute-0 musing_nash[163393]:         {
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "devices": [
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "/dev/loop4"
Jan 31 08:18:41 compute-0 musing_nash[163393]:             ],
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_name": "ceph_lv1",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_size": "21470642176",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "name": "ceph_lv1",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "tags": {
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.crush_device_class": "",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.encrypted": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.objectstore": "bluestore",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osd_id": "1",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.type": "block",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.vdo": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.with_tpm": "0"
Jan 31 08:18:41 compute-0 musing_nash[163393]:             },
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "type": "block",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "vg_name": "ceph_vg1"
Jan 31 08:18:41 compute-0 musing_nash[163393]:         }
Jan 31 08:18:41 compute-0 musing_nash[163393]:     ],
Jan 31 08:18:41 compute-0 musing_nash[163393]:     "2": [
Jan 31 08:18:41 compute-0 musing_nash[163393]:         {
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "devices": [
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "/dev/loop5"
Jan 31 08:18:41 compute-0 musing_nash[163393]:             ],
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_name": "ceph_lv2",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_size": "21470642176",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "name": "ceph_lv2",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "tags": {
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.cluster_name": "ceph",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.crush_device_class": "",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.encrypted": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.objectstore": "bluestore",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osd_id": "2",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.type": "block",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.vdo": "0",
Jan 31 08:18:41 compute-0 musing_nash[163393]:                 "ceph.with_tpm": "0"
Jan 31 08:18:41 compute-0 musing_nash[163393]:             },
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "type": "block",
Jan 31 08:18:41 compute-0 musing_nash[163393]:             "vg_name": "ceph_vg2"
Jan 31 08:18:41 compute-0 musing_nash[163393]:         }
Jan 31 08:18:41 compute-0 musing_nash[163393]:     ]
Jan 31 08:18:41 compute-0 musing_nash[163393]: }
Jan 31 08:18:41 compute-0 systemd[1]: libpod-99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76.scope: Deactivated successfully.
Jan 31 08:18:41 compute-0 podman[163377]: 2026-01-31 08:18:41.962120493 +0000 UTC m=+0.497097571 container died 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:18:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-88c95edd5b98f616af4ba90d73e7988b590c0a023d3d638cb189d09a5ab061f6-merged.mount: Deactivated successfully.
Jan 31 08:18:42 compute-0 podman[163377]: 2026-01-31 08:18:42.01470604 +0000 UTC m=+0.549683108 container remove 99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_nash, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:18:42 compute-0 systemd[1]: libpod-conmon-99e7b1a086da7dc97988475e17baf0b96f799dd8889249eaa6574c2df2763e76.scope: Deactivated successfully.
Jan 31 08:18:42 compute-0 sudo[163299]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:42 compute-0 sudo[163413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:18:42 compute-0 sudo[163413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:42 compute-0 sudo[163413]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:42 compute-0 sudo[163438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:18:42 compute-0 sudo[163438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.487778601 +0000 UTC m=+0.056270002 container create e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:18:42 compute-0 systemd[1]: Started libpod-conmon-e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243.scope.
Jan 31 08:18:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.463542416 +0000 UTC m=+0.032033877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.567586869 +0000 UTC m=+0.136078260 container init e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.575031179 +0000 UTC m=+0.143522570 container start e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.5785922 +0000 UTC m=+0.147083601 container attach e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:18:42 compute-0 frosty_brahmagupta[163490]: 167 167
Jan 31 08:18:42 compute-0 systemd[1]: libpod-e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243.scope: Deactivated successfully.
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.582205062 +0000 UTC m=+0.150696463 container died e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c5f89f6baf97ac9c1cc600035dd7a3f8a0b71f0190eb9795857462ea51d37e5-merged.mount: Deactivated successfully.
Jan 31 08:18:42 compute-0 podman[163474]: 2026-01-31 08:18:42.625902608 +0000 UTC m=+0.194394009 container remove e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:18:42 compute-0 systemd[1]: libpod-conmon-e678fa31ede37e2ef31b59a3b386c2605728dd42346eec16c67bfdbc43896243.scope: Deactivated successfully.
Jan 31 08:18:42 compute-0 podman[163513]: 2026-01-31 08:18:42.785305647 +0000 UTC m=+0.045269042 container create 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:18:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:42 compute-0 systemd[1]: Started libpod-conmon-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope.
Jan 31 08:18:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:18:42 compute-0 podman[163513]: 2026-01-31 08:18:42.767412531 +0000 UTC m=+0.027375966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:18:42 compute-0 podman[163513]: 2026-01-31 08:18:42.86745559 +0000 UTC m=+0.127418995 container init 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:42 compute-0 podman[163513]: 2026-01-31 08:18:42.875278862 +0000 UTC m=+0.135242217 container start 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:18:42 compute-0 podman[163513]: 2026-01-31 08:18:42.878469602 +0000 UTC m=+0.138432947 container attach 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:18:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:18:43 compute-0 lvm[163616]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:18:43 compute-0 lvm[163616]: VG ceph_vg1 finished
Jan 31 08:18:43 compute-0 lvm[163615]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:18:43 compute-0 lvm[163615]: VG ceph_vg0 finished
Jan 31 08:18:43 compute-0 lvm[163617]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:18:43 compute-0 lvm[163617]: VG ceph_vg2 finished
Jan 31 08:18:43 compute-0 podman[163605]: 2026-01-31 08:18:43.537183083 +0000 UTC m=+0.101171262 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:18:43 compute-0 keen_maxwell[163530]: {}
Jan 31 08:18:43 compute-0 systemd[1]: libpod-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope: Deactivated successfully.
Jan 31 08:18:43 compute-0 systemd[1]: libpod-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope: Consumed 1.008s CPU time.
Jan 31 08:18:43 compute-0 podman[163513]: 2026-01-31 08:18:43.582609708 +0000 UTC m=+0.842573103 container died 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-41d6b582070ab0c9981cd9d2f9f0d0852e4d038748ec1651ebfec74bae432dc0-merged.mount: Deactivated successfully.
Jan 31 08:18:43 compute-0 podman[163513]: 2026-01-31 08:18:43.630244636 +0000 UTC m=+0.890208061 container remove 400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:18:43 compute-0 systemd[1]: libpod-conmon-400faceecb25d19aea9cb280f3df963b144b54c4d134dea4601e9ff9824d49fd.scope: Deactivated successfully.
Jan 31 08:18:43 compute-0 sudo[163438]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:18:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:18:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:43 compute-0 sudo[163652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:18:43 compute-0 sudo[163652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:18:43 compute-0 sudo[163652]: pam_unix(sudo:session): session closed for user root
Jan 31 08:18:43 compute-0 ceph-mon[75227]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:43 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:18:44 compute-0 kernel: SELinux:  Converting 2777 SID table entries...
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:18:44 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:18:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:45 compute-0 ceph-mon[75227]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:47 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 08:18:47 compute-0 podman[163684]: 2026-01-31 08:18:47.174120912 +0000 UTC m=+0.064624859 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 08:18:47 compute-0 ceph-mon[75227]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:49 compute-0 ceph-mon[75227]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:51 compute-0 ceph-mon[75227]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:54 compute-0 ceph-mon[75227]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:56 compute-0 ceph-mon[75227]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:18:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:58 compute-0 ceph-mon[75227]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:18:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:00 compute-0 ceph-mon[75227]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:02 compute-0 ceph-mon[75227]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:03 compute-0 ceph-mon[75227]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:05 compute-0 ceph-mon[75227]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:07 compute-0 ceph-mon[75227]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:09 compute-0 ceph-mon[75227]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:12 compute-0 ceph-mon[75227]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:14 compute-0 ceph-mon[75227]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:14 compute-0 podman[179769]: 2026-01-31 08:19:14.166244772 +0000 UTC m=+0.062525129 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:19:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:16 compute-0 ceph-mon[75227]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:19:17.877 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:19:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:19:17.877 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:19:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:19:17.877 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:19:18 compute-0 podman[180586]: 2026-01-31 08:19:18.000113612 +0000 UTC m=+0.053962747 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:19:18 compute-0 ceph-mon[75227]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:19 compute-0 ceph-mon[75227]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:22 compute-0 ceph-mon[75227]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:23 compute-0 ceph-mon[75227]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:25 compute-0 ceph-mon[75227]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:28 compute-0 ceph-mon[75227]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:29 compute-0 kernel: SELinux:  Converting 2778 SID table entries...
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 08:19:29 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 08:19:30 compute-0 ceph-mon[75227]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:19:31
Jan 31 08:19:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:19:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:19:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.rgw.root']
Jan 31 08:19:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:19:32 compute-0 groupadd[180630]: group added to /etc/group: name=dnsmasq, GID=992
Jan 31 08:19:32 compute-0 ceph-mon[75227]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:32 compute-0 groupadd[180630]: group added to /etc/gshadow: name=dnsmasq
Jan 31 08:19:32 compute-0 groupadd[180630]: new group: name=dnsmasq, GID=992
Jan 31 08:19:32 compute-0 useradd[180637]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 31 08:19:32 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 08:19:32 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 08:19:32 compute-0 dbus-broker-launch[771]: Noticed file-system modification, trigger reload.
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:19:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:19:33 compute-0 ceph-mon[75227]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:34 compute-0 groupadd[180650]: group added to /etc/group: name=clevis, GID=991
Jan 31 08:19:34 compute-0 groupadd[180650]: group added to /etc/gshadow: name=clevis
Jan 31 08:19:34 compute-0 groupadd[180650]: new group: name=clevis, GID=991
Jan 31 08:19:34 compute-0 useradd[180657]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 31 08:19:34 compute-0 usermod[180667]: add 'clevis' to group 'tss'
Jan 31 08:19:34 compute-0 usermod[180667]: add 'clevis' to shadow group 'tss'
Jan 31 08:19:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:35 compute-0 ceph-mon[75227]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:37 compute-0 polkitd[43527]: Reloading rules
Jan 31 08:19:37 compute-0 polkitd[43527]: Collecting garbage unconditionally...
Jan 31 08:19:37 compute-0 polkitd[43527]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 08:19:37 compute-0 polkitd[43527]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 08:19:37 compute-0 polkitd[43527]: Finished loading, compiling and executing 3 rules
Jan 31 08:19:37 compute-0 polkitd[43527]: Reloading rules
Jan 31 08:19:37 compute-0 polkitd[43527]: Collecting garbage unconditionally...
Jan 31 08:19:37 compute-0 polkitd[43527]: Loading rules from directory /etc/polkit-1/rules.d
Jan 31 08:19:37 compute-0 polkitd[43527]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 31 08:19:37 compute-0 polkitd[43527]: Finished loading, compiling and executing 3 rules
Jan 31 08:19:37 compute-0 ceph-mon[75227]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:39 compute-0 ceph-mon[75227]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:41 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 08:19:41 compute-0 sshd[1008]: Received signal 15; terminating.
Jan 31 08:19:41 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 08:19:41 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 08:19:41 compute-0 systemd[1]: sshd.service: Consumed 2.442s CPU time, read 32.0K from disk, written 28.0K to disk.
Jan 31 08:19:41 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 08:19:41 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 31 08:19:41 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:19:41 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:19:41 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 08:19:41 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 31 08:19:41 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 31 08:19:41 compute-0 sshd[181475]: Server listening on 0.0.0.0 port 22.
Jan 31 08:19:41 compute-0 sshd[181475]: Server listening on :: port 22.
Jan 31 08:19:41 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 31 08:19:41 compute-0 ceph-mon[75227]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:19:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:19:43 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:19:43 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:19:43 compute-0 systemd[1]: Reloading.
Jan 31 08:19:43 compute-0 systemd-rc-local-generator[181758]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:43 compute-0 systemd-sysv-generator[181761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:44 compute-0 ceph-mon[75227]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:44 compute-0 sudo[181706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:44 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:19:44 compute-0 sudo[181706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:44 compute-0 sudo[181706]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:44 compute-0 sudo[181948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:19:44 compute-0 sudo[181948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:44 compute-0 sudo[181948]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:19:44 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:19:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:19:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:19:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:45 compute-0 podman[183634]: 2026-01-31 08:19:45.257583906 +0000 UTC m=+0.140189961 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:19:45 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:19:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:19:46 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:19:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:19:46 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:19:46 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:46 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:19:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:19:46 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:46 compute-0 sudo[185208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:46 compute-0 sudo[185208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:46 compute-0 sudo[185208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:46 compute-0 sudo[185300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:19:46 compute-0 sudo[185300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:46 compute-0 podman[185612]: 2026-01-31 08:19:46.474658483 +0000 UTC m=+0.016721461 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:46 compute-0 podman[185612]: 2026-01-31 08:19:46.930205886 +0000 UTC m=+0.472268824 container create c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:19:47 compute-0 systemd[1]: Started libpod-conmon-c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9.scope.
Jan 31 08:19:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:47 compute-0 ceph-mon[75227]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:19:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:19:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:19:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:19:47 compute-0 podman[185612]: 2026-01-31 08:19:47.242920056 +0000 UTC m=+0.784983014 container init c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:19:47 compute-0 podman[185612]: 2026-01-31 08:19:47.250301703 +0000 UTC m=+0.792364641 container start c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:19:47 compute-0 festive_allen[186553]: 167 167
Jan 31 08:19:47 compute-0 systemd[1]: libpod-c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9.scope: Deactivated successfully.
Jan 31 08:19:47 compute-0 podman[185612]: 2026-01-31 08:19:47.394467915 +0000 UTC m=+0.936530873 container attach c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:19:47 compute-0 podman[185612]: 2026-01-31 08:19:47.395176705 +0000 UTC m=+0.937239643 container died c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:19:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbfaa65765c18f7038f2e6cf54d502a6f8a6e2f2f3babc28e3d55ae0fc407d56-merged.mount: Deactivated successfully.
Jan 31 08:19:47 compute-0 podman[185612]: 2026-01-31 08:19:47.933334071 +0000 UTC m=+1.475397019 container remove c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_allen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 08:19:48 compute-0 systemd[1]: libpod-conmon-c98d8819fdba8b7e13b9eefeed1c15fd98151f8c78d41d0307ac61ae81eff8b9.scope: Deactivated successfully.
Jan 31 08:19:48 compute-0 podman[187734]: 2026-01-31 08:19:48.108524345 +0000 UTC m=+0.099873218 container create b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:19:48 compute-0 podman[187734]: 2026-01-31 08:19:48.028388102 +0000 UTC m=+0.019736955 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:48 compute-0 systemd[1]: Started libpod-conmon-b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268.scope.
Jan 31 08:19:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:48 compute-0 podman[187774]: 2026-01-31 08:19:48.217181149 +0000 UTC m=+0.178565810 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:19:48 compute-0 ceph-mon[75227]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:48 compute-0 podman[187734]: 2026-01-31 08:19:48.370330153 +0000 UTC m=+0.361679006 container init b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:19:48 compute-0 podman[187734]: 2026-01-31 08:19:48.379021958 +0000 UTC m=+0.370370791 container start b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:19:48 compute-0 podman[187734]: 2026-01-31 08:19:48.442770889 +0000 UTC m=+0.434119752 container attach b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:19:48 compute-0 compassionate_wilbur[187996]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:19:48 compute-0 compassionate_wilbur[187996]: --> All data devices are unavailable
Jan 31 08:19:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:48 compute-0 systemd[1]: libpod-b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268.scope: Deactivated successfully.
Jan 31 08:19:48 compute-0 podman[187734]: 2026-01-31 08:19:48.858867584 +0000 UTC m=+0.850216427 container died b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b41aa5ed63c482d377b1add981bf789a9e3d9eea846995d7f19cdc3df6111df-merged.mount: Deactivated successfully.
Jan 31 08:19:49 compute-0 podman[187734]: 2026-01-31 08:19:49.164067772 +0000 UTC m=+1.155416615 container remove b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:19:49 compute-0 systemd[1]: libpod-conmon-b491cfa6e48de76553b72aa2733252a1c602f08b3445b8251485801e5fbb1268.scope: Deactivated successfully.
Jan 31 08:19:49 compute-0 sudo[185300]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:49 compute-0 sudo[189477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:49 compute-0 sudo[189477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:49 compute-0 sudo[189477]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:49 compute-0 sudo[189555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:19:49 compute-0 sudo[189555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:49 compute-0 ceph-mon[75227]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:49 compute-0 podman[189887]: 2026-01-31 08:19:49.578926072 +0000 UTC m=+0.060301146 container create 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:19:49 compute-0 podman[189887]: 2026-01-31 08:19:49.535858571 +0000 UTC m=+0.017233665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:49 compute-0 systemd[1]: Started libpod-conmon-561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf.scope.
Jan 31 08:19:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:49 compute-0 podman[189887]: 2026-01-31 08:19:49.748858908 +0000 UTC m=+0.230234042 container init 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:19:49 compute-0 podman[189887]: 2026-01-31 08:19:49.75569314 +0000 UTC m=+0.237068244 container start 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:19:49 compute-0 distracted_mcnulty[190076]: 167 167
Jan 31 08:19:49 compute-0 systemd[1]: libpod-561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf.scope: Deactivated successfully.
Jan 31 08:19:49 compute-0 podman[189887]: 2026-01-31 08:19:49.903161475 +0000 UTC m=+0.384536589 container attach 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:19:49 compute-0 podman[189887]: 2026-01-31 08:19:49.903533025 +0000 UTC m=+0.384908109 container died 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bbbede171c708cbf57bab0953a4741631f5eba4cdd37be644667fa6e14b8e70-merged.mount: Deactivated successfully.
Jan 31 08:19:50 compute-0 podman[189887]: 2026-01-31 08:19:50.271001874 +0000 UTC m=+0.752376958 container remove 561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:19:50 compute-0 systemd[1]: libpod-conmon-561759c7c651c0a2fb39c6fdbdf12ca276a69976f87cc98496765f3a7b1794cf.scope: Deactivated successfully.
Jan 31 08:19:50 compute-0 podman[190501]: 2026-01-31 08:19:50.375382108 +0000 UTC m=+0.027198886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:50 compute-0 podman[190501]: 2026-01-31 08:19:50.787976114 +0000 UTC m=+0.439792872 container create 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:19:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:50 compute-0 systemd[1]: Started libpod-conmon-00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba.scope.
Jan 31 08:19:50 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:50 compute-0 sudo[162510]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:50 compute-0 podman[190501]: 2026-01-31 08:19:50.904031546 +0000 UTC m=+0.555848324 container init 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:19:50 compute-0 podman[190501]: 2026-01-31 08:19:50.912442612 +0000 UTC m=+0.564259340 container start 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:19:50 compute-0 podman[190501]: 2026-01-31 08:19:50.933907776 +0000 UTC m=+0.585724604 container attach 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]: {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:     "0": [
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:         {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "devices": [
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "/dev/loop3"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             ],
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_name": "ceph_lv0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_size": "21470642176",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "name": "ceph_lv0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "tags": {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.crush_device_class": "",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.encrypted": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.objectstore": "bluestore",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osd_id": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.type": "block",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.vdo": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.with_tpm": "0"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             },
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "type": "block",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "vg_name": "ceph_vg0"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:         }
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:     ],
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:     "1": [
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:         {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "devices": [
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "/dev/loop4"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             ],
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_name": "ceph_lv1",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_size": "21470642176",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "name": "ceph_lv1",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "tags": {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.crush_device_class": "",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.encrypted": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.objectstore": "bluestore",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osd_id": "1",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.type": "block",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.vdo": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.with_tpm": "0"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             },
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "type": "block",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "vg_name": "ceph_vg1"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:         }
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:     ],
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:     "2": [
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:         {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "devices": [
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "/dev/loop5"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             ],
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_name": "ceph_lv2",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_size": "21470642176",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "name": "ceph_lv2",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "tags": {
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.cluster_name": "ceph",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.crush_device_class": "",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.encrypted": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.objectstore": "bluestore",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osd_id": "2",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.type": "block",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.vdo": "0",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:                 "ceph.with_tpm": "0"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             },
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "type": "block",
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:             "vg_name": "ceph_vg2"
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:         }
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]:     ]
Jan 31 08:19:51 compute-0 relaxed_chatterjee[190517]: }
Jan 31 08:19:51 compute-0 systemd[1]: libpod-00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba.scope: Deactivated successfully.
Jan 31 08:19:51 compute-0 podman[190501]: 2026-01-31 08:19:51.221478888 +0000 UTC m=+0.873295676 container died 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:19:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-8556f11331ad4fd5c5607f0c1e9a6a21d8de26b2f618cba467ea94ab8a84f5a1-merged.mount: Deactivated successfully.
Jan 31 08:19:51 compute-0 podman[190501]: 2026-01-31 08:19:51.54103835 +0000 UTC m=+1.192855108 container remove 00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chatterjee, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:19:51 compute-0 sudo[189555]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:51 compute-0 systemd[1]: libpod-conmon-00c980849a9dea84a5360cb93b309b21931ccbc26eacd3588140bdfb20c76cba.scope: Deactivated successfully.
Jan 31 08:19:51 compute-0 sudo[190688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsewxlocxwwhrjyeoezjzytquhhdwufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847591.0124445-331-173823504266137/AnsiballZ_systemd.py'
Jan 31 08:19:51 compute-0 sudo[190688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:51 compute-0 sudo[190691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:19:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:51 compute-0 sudo[190691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:51 compute-0 sudo[190691]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:51 compute-0 sudo[190717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:19:51 compute-0 sudo[190717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:19:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:19:51 compute-0 systemd[1]: man-db-cache-update.service: Consumed 7.684s CPU time.
Jan 31 08:19:51 compute-0 systemd[1]: run-r058dbcfb0e9246cab0572b18b149369a.service: Deactivated successfully.
Jan 31 08:19:51 compute-0 python3.9[190693]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:19:51 compute-0 systemd[1]: Reloading.
Jan 31 08:19:51 compute-0 ceph-mon[75227]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:51 compute-0 podman[190757]: 2026-01-31 08:19:51.992511189 +0000 UTC m=+0.068949559 container create b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:19:52 compute-0 systemd-sysv-generator[190795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:52 compute-0 systemd-rc-local-generator[190789]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:52 compute-0 podman[190757]: 2026-01-31 08:19:51.946104365 +0000 UTC m=+0.022542775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:52 compute-0 systemd[1]: Started libpod-conmon-b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19.scope.
Jan 31 08:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:52 compute-0 podman[190757]: 2026-01-31 08:19:52.246978782 +0000 UTC m=+0.323417192 container init b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:52 compute-0 sudo[190688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:52 compute-0 podman[190757]: 2026-01-31 08:19:52.251514149 +0000 UTC m=+0.327952519 container start b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:19:52 compute-0 keen_kilby[190810]: 167 167
Jan 31 08:19:52 compute-0 systemd[1]: libpod-b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19.scope: Deactivated successfully.
Jan 31 08:19:52 compute-0 podman[190757]: 2026-01-31 08:19:52.266068728 +0000 UTC m=+0.342507108 container attach b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:19:52 compute-0 podman[190757]: 2026-01-31 08:19:52.266975044 +0000 UTC m=+0.343413424 container died b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b656e9f857119f263029f74c354e410357fc9d608e7b1649fc28c29414d4e81b-merged.mount: Deactivated successfully.
Jan 31 08:19:52 compute-0 podman[190757]: 2026-01-31 08:19:52.389888958 +0000 UTC m=+0.466327328 container remove b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:19:52 compute-0 systemd[1]: libpod-conmon-b8078f8779e45a6ea182c6fd2f664a9bf0988928a3c78cb09a899817eba7df19.scope: Deactivated successfully.
Jan 31 08:19:52 compute-0 podman[190941]: 2026-01-31 08:19:52.513209194 +0000 UTC m=+0.051796436 container create e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:19:52 compute-0 sudo[190999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikjbbsxdjonoqjvwcwxnpjforeijwatx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847592.3397532-331-4378877024195/AnsiballZ_systemd.py'
Jan 31 08:19:52 compute-0 sudo[190999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:52 compute-0 systemd[1]: Started libpod-conmon-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope.
Jan 31 08:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:52 compute-0 podman[190941]: 2026-01-31 08:19:52.48672686 +0000 UTC m=+0.025314132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:19:52 compute-0 podman[190941]: 2026-01-31 08:19:52.604177571 +0000 UTC m=+0.142764823 container init e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:19:52 compute-0 podman[190941]: 2026-01-31 08:19:52.610281512 +0000 UTC m=+0.148869194 container start e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:19:52 compute-0 podman[190941]: 2026-01-31 08:19:52.632843816 +0000 UTC m=+0.171431078 container attach e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:19:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:52 compute-0 python3.9[191001]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:19:52 compute-0 systemd[1]: Reloading.
Jan 31 08:19:52 compute-0 systemd-rc-local-generator[191068]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:52 compute-0 systemd-sysv-generator[191074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:53 compute-0 lvm[191120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:19:53 compute-0 lvm[191120]: VG ceph_vg0 finished
Jan 31 08:19:53 compute-0 sudo[190999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:53 compute-0 lvm[191121]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:19:53 compute-0 lvm[191121]: VG ceph_vg1 finished
Jan 31 08:19:53 compute-0 lvm[191123]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:19:53 compute-0 lvm[191123]: VG ceph_vg2 finished
Jan 31 08:19:53 compute-0 brave_keldysh[191004]: {}
Jan 31 08:19:53 compute-0 systemd[1]: libpod-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope: Deactivated successfully.
Jan 31 08:19:53 compute-0 systemd[1]: libpod-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope: Consumed 1.002s CPU time.
Jan 31 08:19:53 compute-0 podman[190941]: 2026-01-31 08:19:53.368867873 +0000 UTC m=+0.907455125 container died e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:19:53 compute-0 sudo[191289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhvlawvpynyyvgkwawahlybefaihvunw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847593.285026-331-199916278537426/AnsiballZ_systemd.py'
Jan 31 08:19:53 compute-0 sudo[191289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2efcf992e377be3324ffbf5d6d6c226138758338c8445157e21c6ae322bfe5be-merged.mount: Deactivated successfully.
Jan 31 08:19:53 compute-0 python3.9[191291]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:19:53 compute-0 systemd[1]: Reloading.
Jan 31 08:19:53 compute-0 podman[190941]: 2026-01-31 08:19:53.979455095 +0000 UTC m=+1.518042337 container remove e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_keldysh, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:19:54 compute-0 sudo[190717]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:19:54 compute-0 ceph-mon[75227]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:54 compute-0 systemd-sysv-generator[191320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:54 compute-0 systemd-rc-local-generator[191317]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:19:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:19:54 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:19:54 compute-0 systemd[1]: libpod-conmon-e4e819cf76dda74f139d8107660e0e414b8873ca4af04464522d0c2024e78180.scope: Deactivated successfully.
Jan 31 08:19:54 compute-0 sudo[191329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:19:54 compute-0 sudo[191329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:19:54 compute-0 sudo[191329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:54 compute-0 sudo[191289]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:54 compute-0 sudo[191504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlyewuzpftfdmdbcusmduenceaorrzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847594.3623579-331-169704850147284/AnsiballZ_systemd.py'
Jan 31 08:19:54 compute-0 sudo[191504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:54 compute-0 python3.9[191506]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:19:54 compute-0 systemd[1]: Reloading.
Jan 31 08:19:55 compute-0 systemd-sysv-generator[191536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:55 compute-0 systemd-rc-local-generator[191533]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:19:55 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:19:55 compute-0 sudo[191504]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:55 compute-0 sudo[191693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdddggpgbfqawilfijtpqvhslskertmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847595.394839-360-184790753612671/AnsiballZ_systemd.py'
Jan 31 08:19:55 compute-0 sudo[191693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:55 compute-0 python3.9[191695]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:19:56 compute-0 systemd[1]: Reloading.
Jan 31 08:19:56 compute-0 systemd-rc-local-generator[191723]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:56 compute-0 systemd-sysv-generator[191727]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:56 compute-0 ceph-mon[75227]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:56 compute-0 sudo[191693]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:19:56 compute-0 sudo[191884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcjkuymeiscapkoafrsvsmarexijwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847596.582366-360-2431539330082/AnsiballZ_systemd.py'
Jan 31 08:19:56 compute-0 sudo[191884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:57 compute-0 python3.9[191886]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:19:57 compute-0 systemd[1]: Reloading.
Jan 31 08:19:57 compute-0 systemd-sysv-generator[191916]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:57 compute-0 systemd-rc-local-generator[191913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:57 compute-0 ceph-mon[75227]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:57 compute-0 sudo[191884]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:57 compute-0 sudo[192074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjrvyxabxmvgveeeexalnodlwnmifnav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847597.5687785-360-62809669681930/AnsiballZ_systemd.py'
Jan 31 08:19:57 compute-0 sudo[192074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:58 compute-0 python3.9[192076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:19:58 compute-0 systemd[1]: Reloading.
Jan 31 08:19:58 compute-0 systemd-sysv-generator[192101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:19:58 compute-0 systemd-rc-local-generator[192097]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:19:58 compute-0 sudo[192074]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:19:58 compute-0 sudo[192264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wywnxuudpglkvllmasfdtowbjvmfeysb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847598.5836942-360-152309467214589/AnsiballZ_systemd.py'
Jan 31 08:19:58 compute-0 sudo[192264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:59 compute-0 python3.9[192266]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:19:59 compute-0 sudo[192264]: pam_unix(sudo:session): session closed for user root
Jan 31 08:19:59 compute-0 sudo[192419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfgrjlvyfhpgdrqmmqrnzxjjlptrvyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847599.3904881-360-217400896602766/AnsiballZ_systemd.py'
Jan 31 08:19:59 compute-0 sudo[192419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:19:59 compute-0 python3.9[192421]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:00 compute-0 systemd[1]: Reloading.
Jan 31 08:20:00 compute-0 systemd-rc-local-generator[192451]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:20:00 compute-0 systemd-sysv-generator[192455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:20:00 compute-0 ceph-mon[75227]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:00 compute-0 sudo[192419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:00 compute-0 sudo[192609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgmjsirrpwixcmlanouctycshhitzqcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847600.5027628-396-256747824609271/AnsiballZ_systemd.py'
Jan 31 08:20:00 compute-0 sudo[192609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:01 compute-0 python3.9[192611]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 08:20:01 compute-0 systemd[1]: Reloading.
Jan 31 08:20:01 compute-0 systemd-rc-local-generator[192638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:20:01 compute-0 systemd-sysv-generator[192641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:20:01 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 08:20:01 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 08:20:01 compute-0 sudo[192609]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:01 compute-0 sudo[192801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdqyawchkeisrksqxvvittqpaochpupe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847601.5624783-404-209980564883485/AnsiballZ_systemd.py'
Jan 31 08:20:01 compute-0 sudo[192801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:02 compute-0 python3.9[192803]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:02 compute-0 sudo[192801]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:02 compute-0 ceph-mon[75227]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:02 compute-0 sudo[192956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwdnevtehyhokkyecngjldxyvsyqfuwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847602.5623946-404-168531881410143/AnsiballZ_systemd.py'
Jan 31 08:20:02 compute-0 sudo[192956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:03 compute-0 python3.9[192958]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:03 compute-0 sudo[192956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:03 compute-0 ceph-mon[75227]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:03 compute-0 sudo[193111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxqtoovmlsnjisovnxtkqhohkstmsfxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847603.368235-404-218921987727790/AnsiballZ_systemd.py'
Jan 31 08:20:03 compute-0 sudo[193111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:03 compute-0 python3.9[193113]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:04 compute-0 sudo[193111]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:04 compute-0 sudo[193266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcagxiqgfvwkchxkryrgjevxlinuylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847604.1432834-404-11239288425606/AnsiballZ_systemd.py'
Jan 31 08:20:04 compute-0 sudo[193266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:04 compute-0 python3.9[193268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:04 compute-0 sudo[193266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:05 compute-0 sudo[193421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hopnmhymfwdohbtrndeymztntuudmjky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847604.942037-404-135737129739691/AnsiballZ_systemd.py'
Jan 31 08:20:05 compute-0 sudo[193421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:05 compute-0 python3.9[193423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:05 compute-0 sudo[193421]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:05 compute-0 sudo[193576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmytksbsqxrhfwblaegkbetkqwavanis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847605.642636-404-125246170698459/AnsiballZ_systemd.py'
Jan 31 08:20:05 compute-0 sudo[193576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:05 compute-0 ceph-mon[75227]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:06 compute-0 python3.9[193578]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:06 compute-0 sudo[193576]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:06 compute-0 sudo[193731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfltzueopxhgpmritynpilygnyvmwob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847606.4276197-404-128856121231378/AnsiballZ_systemd.py'
Jan 31 08:20:06 compute-0 sudo[193731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:07 compute-0 python3.9[193733]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:07 compute-0 sudo[193731]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:07 compute-0 sudo[193886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdaksyaskaaprxdqocrzyqomcfdbsta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847607.2645218-404-64863410013656/AnsiballZ_systemd.py'
Jan 31 08:20:07 compute-0 sudo[193886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:07 compute-0 python3.9[193888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:08 compute-0 sudo[193886]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:08 compute-0 ceph-mon[75227]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:08 compute-0 sudo[194041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujrlrjsroztwkczstosttbehqhlnzhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847608.1315517-404-166097933980989/AnsiballZ_systemd.py'
Jan 31 08:20:08 compute-0 sudo[194041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:08 compute-0 python3.9[194043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:08 compute-0 sudo[194041]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:09 compute-0 sudo[194196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxcumuemhbxeaowadmyrxofhrjowfrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847608.9967244-404-253542747869317/AnsiballZ_systemd.py'
Jan 31 08:20:09 compute-0 sudo[194196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:09 compute-0 python3.9[194198]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:09 compute-0 ceph-mon[75227]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:09 compute-0 sudo[194196]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:10 compute-0 sudo[194351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqmztensgmjuouiqnlehctagzszlunve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847609.9259117-404-194014998653140/AnsiballZ_systemd.py'
Jan 31 08:20:10 compute-0 sudo[194351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:10 compute-0 python3.9[194353]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:10 compute-0 sudo[194351]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:11 compute-0 sudo[194506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpcbcfqwgxvvradgbaryzmytztunhguz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847610.8846798-404-264490961079557/AnsiballZ_systemd.py'
Jan 31 08:20:11 compute-0 sudo[194506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:11 compute-0 python3.9[194508]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:11 compute-0 sudo[194506]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:11 compute-0 sudo[194661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrnizwrnfuodfeazbzfhoqjdwtnmova ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847611.6878417-404-267898006420/AnsiballZ_systemd.py'
Jan 31 08:20:11 compute-0 sudo[194661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:12 compute-0 ceph-mon[75227]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:12 compute-0 python3.9[194663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:12 compute-0 sudo[194661]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:12 compute-0 sudo[194816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmegcifzcclavoszoclbxfucofecymgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847612.3957443-404-264333399412710/AnsiballZ_systemd.py'
Jan 31 08:20:12 compute-0 sudo[194816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:12 compute-0 python3.9[194818]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 08:20:13 compute-0 sudo[194816]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:13 compute-0 sudo[194971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpxtkkdskrtrwqrvgqgfsrsrjhmfmvwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847613.3763485-506-84195958471236/AnsiballZ_file.py'
Jan 31 08:20:13 compute-0 sudo[194971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:13 compute-0 python3.9[194973]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:20:13 compute-0 sudo[194971]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:14 compute-0 sudo[195123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsjewzzznosuituqonuwyctxxffbqsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847613.975314-506-113376224355840/AnsiballZ_file.py'
Jan 31 08:20:14 compute-0 sudo[195123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:14 compute-0 python3.9[195125]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:20:14 compute-0 ceph-mon[75227]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:14 compute-0 sudo[195123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:14 compute-0 sudo[195275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwswbqthkxavgmdpjerhextmwyzdpluk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847614.5481486-506-262383008489004/AnsiballZ_file.py'
Jan 31 08:20:14 compute-0 sudo[195275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:14 compute-0 python3.9[195277]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:20:14 compute-0 sudo[195275]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:15 compute-0 sudo[195427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spennfebeaqefmjrydghcibxdvhlsakc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847615.0677-506-132318892062605/AnsiballZ_file.py'
Jan 31 08:20:15 compute-0 sudo[195427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:15 compute-0 podman[195429]: 2026-01-31 08:20:15.358126377 +0000 UTC m=+0.063190607 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:20:15 compute-0 ceph-mon[75227]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:15 compute-0 python3.9[195430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:20:15 compute-0 sudo[195427]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:15 compute-0 sudo[195605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvvnkubjkbetwotbkgczplndytfsyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847615.658662-506-65860593029051/AnsiballZ_file.py'
Jan 31 08:20:15 compute-0 sudo[195605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:16 compute-0 python3.9[195607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:20:16 compute-0 sudo[195605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:16 compute-0 sudo[195757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratcidjprcelltlzlpepljmgukfajhkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847616.245684-506-104535647479054/AnsiballZ_file.py'
Jan 31 08:20:16 compute-0 sudo[195757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:16 compute-0 python3.9[195759]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:20:16 compute-0 sudo[195757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:17 compute-0 python3.9[195909]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:20:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:20:17.878 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:20:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:20:17.879 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:20:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:20:17.879 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:20:17 compute-0 ceph-mon[75227]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:17 compute-0 sudo[196059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nghsmtiyszrgusqagktkdwqsoahtuzuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847617.5111723-557-228513413920331/AnsiballZ_stat.py'
Jan 31 08:20:17 compute-0 sudo[196059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:18 compute-0 python3.9[196061]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:18 compute-0 sudo[196059]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:18 compute-0 sudo[196195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaieeussnlipzlamtsbacoirwfmveumi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847617.5111723-557-228513413920331/AnsiballZ_copy.py'
Jan 31 08:20:18 compute-0 sudo[196195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:18 compute-0 podman[196158]: 2026-01-31 08:20:18.736746686 +0000 UTC m=+0.073415904 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:20:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:18 compute-0 python3.9[196203]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847617.5111723-557-228513413920331/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:18 compute-0 sudo[196195]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:19 compute-0 sudo[196355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrdifjvaulafmukmvlzdqfpovxfzmhjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847619.0429628-557-11786905300880/AnsiballZ_stat.py'
Jan 31 08:20:19 compute-0 sudo[196355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:19 compute-0 python3.9[196357]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:19 compute-0 sudo[196355]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:19 compute-0 sudo[196480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjwnqlfhapciynljfhjfxzwaybzivccl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847619.0429628-557-11786905300880/AnsiballZ_copy.py'
Jan 31 08:20:19 compute-0 sudo[196480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:19 compute-0 python3.9[196482]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847619.0429628-557-11786905300880/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:19 compute-0 ceph-mon[75227]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:19 compute-0 sudo[196480]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:20 compute-0 sudo[196632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupwbugiqmefynqolwlqwgkemymwjber ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847620.0355444-557-90210697993411/AnsiballZ_stat.py'
Jan 31 08:20:20 compute-0 sudo[196632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:20 compute-0 python3.9[196634]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:20 compute-0 sudo[196632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:20 compute-0 sudo[196757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yntdiplqtwnwrnhoawbeuykdesfbffmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847620.0355444-557-90210697993411/AnsiballZ_copy.py'
Jan 31 08:20:20 compute-0 sudo[196757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:20 compute-0 python3.9[196759]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847620.0355444-557-90210697993411/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:21 compute-0 sudo[196757]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:21 compute-0 sudo[196909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qejflqnecjnhqnjfciztbthejogttpym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847621.1396127-557-262985101030338/AnsiballZ_stat.py'
Jan 31 08:20:21 compute-0 sudo[196909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:21 compute-0 python3.9[196911]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:21 compute-0 sudo[196909]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:21 compute-0 sudo[197034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrjqrjbvetvgcwdwmdiovzotobcgcnsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847621.1396127-557-262985101030338/AnsiballZ_copy.py'
Jan 31 08:20:21 compute-0 sudo[197034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:21 compute-0 ceph-mon[75227]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:22 compute-0 python3.9[197036]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847621.1396127-557-262985101030338/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:22 compute-0 sudo[197034]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:22 compute-0 sudo[197186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnhhfkfbouqtppmdzwdvjwqvduawdqlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847622.2890234-557-59066174173586/AnsiballZ_stat.py'
Jan 31 08:20:22 compute-0 sudo[197186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:22 compute-0 python3.9[197188]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:22 compute-0 sudo[197186]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:23 compute-0 sudo[197311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwtajxfpxnpyhhuzutabevqobildmalb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847622.2890234-557-59066174173586/AnsiballZ_copy.py'
Jan 31 08:20:23 compute-0 sudo[197311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:23 compute-0 python3.9[197313]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847622.2890234-557-59066174173586/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:23 compute-0 sudo[197311]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:23 compute-0 sudo[197463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgidvjlmzutipoznlyqftgjtbbpnxqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847623.3872836-557-39375791179528/AnsiballZ_stat.py'
Jan 31 08:20:23 compute-0 sudo[197463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:23 compute-0 ceph-mon[75227]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:23 compute-0 python3.9[197465]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:23 compute-0 sudo[197463]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:24 compute-0 sudo[197588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptmvyzxrrbummidelfegkunlylxiirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847623.3872836-557-39375791179528/AnsiballZ_copy.py'
Jan 31 08:20:24 compute-0 sudo[197588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:24 compute-0 python3.9[197590]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847623.3872836-557-39375791179528/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:24 compute-0 sudo[197588]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:24 compute-0 sudo[197740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtqxbmmeabjkbjwjawcaqmcmsyergcro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847624.5778432-557-273618214377861/AnsiballZ_stat.py'
Jan 31 08:20:24 compute-0 sudo[197740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:24 compute-0 python3.9[197742]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:25 compute-0 sudo[197740]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:25 compute-0 sudo[197863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhiztregrrvjerniqwoxjzkifbpmpaoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847624.5778432-557-273618214377861/AnsiballZ_copy.py'
Jan 31 08:20:25 compute-0 sudo[197863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:25 compute-0 python3.9[197865]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847624.5778432-557-273618214377861/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:25 compute-0 sudo[197863]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:25 compute-0 sudo[198015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olnxbahgzapsfnfhxxitlfqjepstbtdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847625.6731327-557-128161599688947/AnsiballZ_stat.py'
Jan 31 08:20:25 compute-0 sudo[198015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:25 compute-0 ceph-mon[75227]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:26 compute-0 python3.9[198017]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:26 compute-0 sudo[198015]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:26 compute-0 sudo[198140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skqynmjutcgrtbrvhcficuhsokbqwfhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847625.6731327-557-128161599688947/AnsiballZ_copy.py'
Jan 31 08:20:26 compute-0 sudo[198140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:26 compute-0 python3.9[198142]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769847625.6731327-557-128161599688947/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:26 compute-0 sudo[198140]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:27 compute-0 sudo[198292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrixttixpgzacplhtimaegwvpmkkvtss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847626.8115017-670-105938483195496/AnsiballZ_command.py'
Jan 31 08:20:27 compute-0 sudo[198292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:27 compute-0 python3.9[198294]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 08:20:27 compute-0 sudo[198292]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:27 compute-0 sudo[198445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdnqdcrubeaaciyjalvlhjqimhhdohyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847627.4765887-679-26470306847945/AnsiballZ_file.py'
Jan 31 08:20:27 compute-0 sudo[198445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:27 compute-0 ceph-mon[75227]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:27 compute-0 python3.9[198447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:28 compute-0 sudo[198445]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:28 compute-0 sudo[198597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vscvkgnelulucacrjhnfusoazaiqucrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847628.1238637-679-84135697684963/AnsiballZ_file.py'
Jan 31 08:20:28 compute-0 sudo[198597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:28 compute-0 python3.9[198599]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:28 compute-0 sudo[198597]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:28 compute-0 sudo[198749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxuijdxvrggecpphqkxveegnhhlcddyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847628.6488237-679-183176186897370/AnsiballZ_file.py'
Jan 31 08:20:28 compute-0 sudo[198749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:29 compute-0 python3.9[198751]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:29 compute-0 sudo[198749]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:29 compute-0 sudo[198901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glrmvspilwvmbwaeharduhshukoceauh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847629.1476119-679-253582568064456/AnsiballZ_file.py'
Jan 31 08:20:29 compute-0 sudo[198901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:29 compute-0 python3.9[198903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:29 compute-0 sudo[198901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:29 compute-0 sudo[199053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynxundmugggdfarnetsbyzrcftwqpdnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847629.6787927-679-159425712585789/AnsiballZ_file.py'
Jan 31 08:20:29 compute-0 sudo[199053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:29 compute-0 ceph-mon[75227]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:30 compute-0 python3.9[199055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:30 compute-0 sudo[199053]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:30 compute-0 sudo[199205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-airmrqwelorzpiejiswmuhbmxwuqpzld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847630.2609572-679-60783234990636/AnsiballZ_file.py'
Jan 31 08:20:30 compute-0 sudo[199205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:30 compute-0 python3.9[199207]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:30 compute-0 sudo[199205]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:30 compute-0 sudo[199357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvseyrpofqwwnordblueemedqkwwxuao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847630.8129406-679-232355715720555/AnsiballZ_file.py'
Jan 31 08:20:30 compute-0 sudo[199357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:31 compute-0 python3.9[199359]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:31 compute-0 sudo[199357]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:31 compute-0 sudo[199509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ladrnafzbljvidheoshmiogtflphduhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847631.2892652-679-15781270861705/AnsiballZ_file.py'
Jan 31 08:20:31 compute-0 sudo[199509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:20:31
Jan 31 08:20:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:20:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:20:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Jan 31 08:20:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:20:31 compute-0 python3.9[199511]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:31 compute-0 sudo[199509]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:31 compute-0 ceph-mon[75227]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:32 compute-0 sudo[199661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddtexvdnjuabpbeaenaxqbbieddqilhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847631.8786206-679-232383581013918/AnsiballZ_file.py'
Jan 31 08:20:32 compute-0 sudo[199661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:32 compute-0 python3.9[199663]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:32 compute-0 sudo[199661]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:32 compute-0 sudo[199813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgsphokowoniwwualkgsgjysfozdiaol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847632.451542-679-85223693534538/AnsiballZ_file.py'
Jan 31 08:20:32 compute-0 sudo[199813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:32 compute-0 python3.9[199815]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:32 compute-0 sudo[199813]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:20:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:20:33 compute-0 sudo[199965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhdnzzmyffpsfdavgqaznwcbtvebungo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847632.9944043-679-188410167022761/AnsiballZ_file.py'
Jan 31 08:20:33 compute-0 sudo[199965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:33 compute-0 python3.9[199967]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:33 compute-0 sudo[199965]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:33 compute-0 sudo[200117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igftdhnnknjmzsouulbyuscsefxnmmtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847633.518627-679-79113010478580/AnsiballZ_file.py'
Jan 31 08:20:33 compute-0 sudo[200117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:33 compute-0 python3.9[200119]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:33 compute-0 sudo[200117]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:33 compute-0 ceph-mon[75227]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:34 compute-0 sudo[200269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetbfmztwhgdzlkwgjrpgwdrwdtzuzql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847634.0448573-679-185220805592382/AnsiballZ_file.py'
Jan 31 08:20:34 compute-0 sudo[200269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:34 compute-0 python3.9[200271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:34 compute-0 sudo[200269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:34 compute-0 sudo[200421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdvfnkwutxbrttvvnfxzelshwpnwlzcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847634.595786-679-97326277231677/AnsiballZ_file.py'
Jan 31 08:20:34 compute-0 sudo[200421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:35 compute-0 python3.9[200423]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:35 compute-0 sudo[200421]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:35 compute-0 sudo[200573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqgywmbqycgedotowmxkjfqvjrabzzmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847635.198031-778-225041944612494/AnsiballZ_stat.py'
Jan 31 08:20:35 compute-0 sudo[200573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:35 compute-0 python3.9[200575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:35 compute-0 sudo[200573]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:35 compute-0 sudo[200696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrhkzddjzqhakmoiydzygrusuvphrkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847635.198031-778-225041944612494/AnsiballZ_copy.py'
Jan 31 08:20:35 compute-0 sudo[200696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:35 compute-0 ceph-mon[75227]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:36 compute-0 python3.9[200698]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847635.198031-778-225041944612494/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:36 compute-0 sudo[200696]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:36 compute-0 sudo[200848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqhjhlrwjenvzwccqyzatwfurvuxngri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847636.2677617-778-259874175054680/AnsiballZ_stat.py'
Jan 31 08:20:36 compute-0 sudo[200848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:36 compute-0 python3.9[200850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:36 compute-0 sudo[200848]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:37 compute-0 sudo[200971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vspbcpxhnciojlcuykvqqnqmsaocvpbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847636.2677617-778-259874175054680/AnsiballZ_copy.py'
Jan 31 08:20:37 compute-0 sudo[200971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:37 compute-0 python3.9[200973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847636.2677617-778-259874175054680/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:37 compute-0 sudo[200971]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:37 compute-0 sudo[201123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytdhulhuwlqfcuuupkkomhxsecylbkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847637.307086-778-187944831826310/AnsiballZ_stat.py'
Jan 31 08:20:37 compute-0 sudo[201123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:37 compute-0 python3.9[201125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:37 compute-0 sudo[201123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:37 compute-0 ceph-mon[75227]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:38 compute-0 sudo[201246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeniqcyrlutpmxaufduotvgsjbhlvvoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847637.307086-778-187944831826310/AnsiballZ_copy.py'
Jan 31 08:20:38 compute-0 sudo[201246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:38 compute-0 python3.9[201248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847637.307086-778-187944831826310/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:38 compute-0 sudo[201246]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:38 compute-0 sudo[201398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myacykkdmhrfqilutjxoonxyrdymxxzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847638.3523028-778-118226693732491/AnsiballZ_stat.py'
Jan 31 08:20:38 compute-0 sudo[201398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:38 compute-0 python3.9[201400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:38 compute-0 sudo[201398]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:39 compute-0 sudo[201521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcalrrepqxdzfwyelzsxcozpxqnlaawu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847638.3523028-778-118226693732491/AnsiballZ_copy.py'
Jan 31 08:20:39 compute-0 sudo[201521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:39 compute-0 python3.9[201523]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847638.3523028-778-118226693732491/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:39 compute-0 sudo[201521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:39 compute-0 sudo[201673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiswsxpfuwicjfzqikhxshfeflmfjday ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847639.3471482-778-274815403422036/AnsiballZ_stat.py'
Jan 31 08:20:39 compute-0 sudo[201673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:39 compute-0 python3.9[201675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:39 compute-0 sudo[201673]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:39 compute-0 ceph-mon[75227]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:40 compute-0 sudo[201796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgxkgrlaqqqwnnrznnkndlhvofbuiyzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847639.3471482-778-274815403422036/AnsiballZ_copy.py'
Jan 31 08:20:40 compute-0 sudo[201796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:40 compute-0 python3.9[201798]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847639.3471482-778-274815403422036/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:40 compute-0 sudo[201796]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:40 compute-0 sudo[201948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyodrcywsuxwgqqzwonuypngipesqymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847640.414436-778-153144207950861/AnsiballZ_stat.py'
Jan 31 08:20:40 compute-0 sudo[201948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:40 compute-0 python3.9[201950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:40 compute-0 sudo[201948]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:41 compute-0 sudo[202071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcrittdcghkzwhcarbqioxktjguokutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847640.414436-778-153144207950861/AnsiballZ_copy.py'
Jan 31 08:20:41 compute-0 sudo[202071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:41 compute-0 python3.9[202073]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847640.414436-778-153144207950861/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:41 compute-0 sudo[202071]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:41 compute-0 sudo[202223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-girsdjhgolecvtzyetqwtqwttrxywbkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847641.551892-778-29050396467474/AnsiballZ_stat.py'
Jan 31 08:20:41 compute-0 sudo[202223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:41 compute-0 python3.9[202225]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:42 compute-0 ceph-mon[75227]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:42 compute-0 sudo[202223]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:42 compute-0 sudo[202346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqpbbohpimnuuohgvdslpqehwgftitxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847641.551892-778-29050396467474/AnsiballZ_copy.py'
Jan 31 08:20:42 compute-0 sudo[202346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:42 compute-0 python3.9[202348]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847641.551892-778-29050396467474/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:42 compute-0 sudo[202346]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:42 compute-0 sudo[202498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrezcarlfcgcxrdxgxxhzyyiaginqpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847642.6241682-778-168187885785000/AnsiballZ_stat.py'
Jan 31 08:20:42 compute-0 sudo[202498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:43 compute-0 python3.9[202500]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:43 compute-0 sudo[202498]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:20:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:20:43 compute-0 sudo[202621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evwjdsjplkpblsbybqozkygszqyzqqxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847642.6241682-778-168187885785000/AnsiballZ_copy.py'
Jan 31 08:20:43 compute-0 sudo[202621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:43 compute-0 python3.9[202623]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847642.6241682-778-168187885785000/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:43 compute-0 sudo[202621]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:44 compute-0 ceph-mon[75227]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:44 compute-0 sudo[202773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omotfrgwgmfbwciabnslnkpjfmpbawmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847643.7442172-778-209186874680410/AnsiballZ_stat.py'
Jan 31 08:20:44 compute-0 sudo[202773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:44 compute-0 python3.9[202775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:44 compute-0 sudo[202773]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:44 compute-0 sudo[202896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiyynkjbyelgpswwcuxwlfvsnbxuyhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847643.7442172-778-209186874680410/AnsiballZ_copy.py'
Jan 31 08:20:44 compute-0 sudo[202896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:44 compute-0 python3.9[202898]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847643.7442172-778-209186874680410/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:44 compute-0 sudo[202896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:45 compute-0 sudo[203048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyywcukomvsodmdtloctrwjimbklftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847644.8442516-778-226616864293055/AnsiballZ_stat.py'
Jan 31 08:20:45 compute-0 sudo[203048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:45 compute-0 python3.9[203050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:45 compute-0 sudo[203048]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:45 compute-0 sudo[203184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezipgngskbzncmhbcuxqkbtvfcoqogjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847644.8442516-778-226616864293055/AnsiballZ_copy.py'
Jan 31 08:20:45 compute-0 sudo[203184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:45 compute-0 podman[203145]: 2026-01-31 08:20:45.619232131 +0000 UTC m=+0.082832173 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:20:45 compute-0 python3.9[203192]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847644.8442516-778-226616864293055/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:45 compute-0 sudo[203184]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:46 compute-0 ceph-mon[75227]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:46 compute-0 sudo[203349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltfqiejuccngzqjfjhotjhwudqsgazsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847645.9082956-778-198690324807758/AnsiballZ_stat.py'
Jan 31 08:20:46 compute-0 sudo[203349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:46 compute-0 python3.9[203351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:46 compute-0 sudo[203349]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:46 compute-0 sudo[203472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnigoypyqvmabuaondybmchrtnlacxqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847645.9082956-778-198690324807758/AnsiballZ_copy.py'
Jan 31 08:20:46 compute-0 sudo[203472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:46 compute-0 python3.9[203474]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847645.9082956-778-198690324807758/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:46 compute-0 sudo[203472]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:47 compute-0 sudo[203624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrtifhxsmtnowghiyusdrziczaaodbcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847646.9280689-778-147339654939247/AnsiballZ_stat.py'
Jan 31 08:20:47 compute-0 sudo[203624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:47 compute-0 python3.9[203626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:47 compute-0 sudo[203624]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:47 compute-0 sudo[203747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdovuxldbdhywjvrhydlaioywdnhbtvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847646.9280689-778-147339654939247/AnsiballZ_copy.py'
Jan 31 08:20:47 compute-0 sudo[203747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:47 compute-0 python3.9[203749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847646.9280689-778-147339654939247/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:47 compute-0 sudo[203747]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-0 ceph-mon[75227]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:48 compute-0 sudo[203899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xprqqelkyrdnmseayynxmxxndcslczqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847648.1116261-778-43828169430092/AnsiballZ_stat.py'
Jan 31 08:20:48 compute-0 sudo[203899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:48 compute-0 python3.9[203901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:48 compute-0 sudo[203899]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:48 compute-0 sudo[204032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptzqvebmpflnnnzzuvuxwmllsgebciws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847648.1116261-778-43828169430092/AnsiballZ_copy.py'
Jan 31 08:20:48 compute-0 sudo[204032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:48 compute-0 podman[203996]: 2026-01-31 08:20:48.883129627 +0000 UTC m=+0.065393574 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 08:20:49 compute-0 python3.9[204043]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847648.1116261-778-43828169430092/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:49 compute-0 sudo[204032]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:49 compute-0 sudo[204193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqconfwnpforwylbvybdhlhqtjdkbrzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847649.2118087-778-18053069276744/AnsiballZ_stat.py'
Jan 31 08:20:49 compute-0 sudo[204193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:49 compute-0 python3.9[204195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:20:49 compute-0 sudo[204193]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:49 compute-0 ceph-mon[75227]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:49 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 08:20:49 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:49.812456) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:20:49 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 08:20:49 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847649812484, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2045, "num_deletes": 251, "total_data_size": 3584539, "memory_usage": 3648056, "flush_reason": "Manual Compaction"}
Jan 31 08:20:49 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 08:20:49 compute-0 sudo[204316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zduodretquuatflqkfriehhgumauvsww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847649.2118087-778-18053069276744/AnsiballZ_copy.py'
Jan 31 08:20:49 compute-0 sudo[204316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650032696, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3508187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9788, "largest_seqno": 11832, "table_properties": {"data_size": 3498879, "index_size": 5930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17833, "raw_average_key_size": 19, "raw_value_size": 3480458, "raw_average_value_size": 3795, "num_data_blocks": 269, "num_entries": 917, "num_filter_entries": 917, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847413, "oldest_key_time": 1769847413, "file_creation_time": 1769847649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 220343 microseconds, and 4530 cpu microseconds.
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.032792) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3508187 bytes OK
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.032817) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.053924) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054005) EVENT_LOG_v1 {"time_micros": 1769847650053995, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3575997, prev total WAL file size 3575997, number of live WAL files 2.
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054976) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3425KB)], [26(6457KB)]
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650055020, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10120320, "oldest_snapshot_seqno": -1}
Jan 31 08:20:50 compute-0 python3.9[204318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847649.2118087-778-18053069276744/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:20:50 compute-0 sudo[204316]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3753 keys, 8432731 bytes, temperature: kUnknown
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650449141, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8432731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403364, "index_size": 18889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90282, "raw_average_key_size": 24, "raw_value_size": 8331319, "raw_average_value_size": 2219, "num_data_blocks": 817, "num_entries": 3753, "num_filter_entries": 3753, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847650, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.449362) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8432731 bytes
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.508302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.7 rd, 21.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 4267, records dropped: 514 output_compression: NoCompression
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.508357) EVENT_LOG_v1 {"time_micros": 1769847650508332, "job": 10, "event": "compaction_finished", "compaction_time_micros": 394170, "compaction_time_cpu_micros": 24468, "output_level": 6, "num_output_files": 1, "total_output_size": 8432731, "num_input_records": 4267, "num_output_records": 3753, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650508812, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847650509480, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.054925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509522) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:20:50.509532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:20:50 compute-0 python3.9[204468]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:20:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:51 compute-0 sudo[204621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjhblpxvalhuakhzhwbxlmvmxvnnixtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847650.9976678-984-187740302097974/AnsiballZ_seboolean.py'
Jan 31 08:20:51 compute-0 sudo[204621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:20:51 compute-0 python3.9[204623]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 08:20:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:52 compute-0 ceph-mon[75227]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:54 compute-0 sudo[204628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:54 compute-0 dbus-broker-launch[778]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 08:20:54 compute-0 sudo[204628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:54 compute-0 sudo[204628]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:54 compute-0 sudo[204653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:20:54 compute-0 sudo[204653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:54 compute-0 sudo[204653]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:20:55 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:20:55 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:20:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:20:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:20:56 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:20:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:20:57 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:20:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:20:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:20:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:20:57 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:57 compute-0 sudo[204709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:20:57 compute-0 sudo[204709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:57 compute-0 sudo[204709]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:57 compute-0 sudo[204734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:20:57 compute-0 sudo[204734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:20:57 compute-0 ceph-mon[75227]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:57 compute-0 ceph-mon[75227]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:20:57 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:20:57 compute-0 podman[204772]: 2026-01-31 08:20:57.678683105 +0000 UTC m=+0.023956127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:20:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:20:59 compute-0 podman[204772]: 2026-01-31 08:20:59.11268652 +0000 UTC m=+1.457959492 container create 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:20:59 compute-0 systemd[1]: Started libpod-conmon-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope.
Jan 31 08:20:59 compute-0 sudo[204621]: pam_unix(sudo:session): session closed for user root
Jan 31 08:20:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:00 compute-0 auditd[706]: Audit daemon rotating log files
Jan 31 08:21:00 compute-0 sudo[204940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lliddiooekgcswhfslbeujgqdwkjnzlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847660.1019006-992-50854695994234/AnsiballZ_copy.py'
Jan 31 08:21:00 compute-0 sudo[204940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:00 compute-0 python3.9[204942]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:00 compute-0 sudo[204940]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:00 compute-0 podman[204772]: 2026-01-31 08:21:00.674841233 +0000 UTC m=+3.020114255 container init 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:00 compute-0 podman[204772]: 2026-01-31 08:21:00.680007791 +0000 UTC m=+3.025280763 container start 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:21:00 compute-0 infallible_meninsky[204788]: 167 167
Jan 31 08:21:00 compute-0 systemd[1]: libpod-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope: Deactivated successfully.
Jan 31 08:21:00 compute-0 conmon[204788]: conmon 32945d0d83b1a3e38149 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope/container/memory.events
Jan 31 08:21:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:21:00 compute-0 ceph-mon[75227]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:21:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:21:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:21:00 compute-0 podman[204772]: 2026-01-31 08:21:00.823723567 +0000 UTC m=+3.168996559 container attach 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:21:00 compute-0 podman[204772]: 2026-01-31 08:21:00.824554751 +0000 UTC m=+3.169827713 container died 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:21:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:01 compute-0 sudo[205106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwdebsnxntjtynegzqedcflmouwrwyzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847660.7494998-992-36547187287443/AnsiballZ_copy.py'
Jan 31 08:21:01 compute-0 sudo[205106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8933966e7d79c518e07f119ecffa3d1452a29ead0eb174485b0d2011ecec3ea2-merged.mount: Deactivated successfully.
Jan 31 08:21:01 compute-0 python3.9[205108]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:01 compute-0 sudo[205106]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:01 compute-0 sudo[205258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bryggcmlmudsikbsjuvdcfxhsjpqzjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847661.313059-992-262951677494325/AnsiballZ_copy.py'
Jan 31 08:21:01 compute-0 sudo[205258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:01 compute-0 podman[204772]: 2026-01-31 08:21:01.566922099 +0000 UTC m=+3.912195091 container remove 32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_meninsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:21:01 compute-0 systemd[1]: libpod-conmon-32945d0d83b1a3e38149b5ad067f34f90c5d18a0dad740f653386b04a062b6b4.scope: Deactivated successfully.
Jan 31 08:21:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:01 compute-0 podman[205268]: 2026-01-31 08:21:01.673394318 +0000 UTC m=+0.021986720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:01 compute-0 python3.9[205260]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:01 compute-0 sudo[205258]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:01 compute-0 podman[205268]: 2026-01-31 08:21:01.815272741 +0000 UTC m=+0.163865103 container create c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:21:02 compute-0 systemd[1]: Started libpod-conmon-c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1.scope.
Jan 31 08:21:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:02 compute-0 ceph-mon[75227]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:02 compute-0 ceph-mon[75227]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:02 compute-0 sudo[205436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfuneqxkbooiachzolajqmptiphjbpsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847661.9290175-992-129153342560066/AnsiballZ_copy.py'
Jan 31 08:21:02 compute-0 sudo[205436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:02 compute-0 podman[205268]: 2026-01-31 08:21:02.278442365 +0000 UTC m=+0.627034747 container init c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:21:02 compute-0 podman[205268]: 2026-01-31 08:21:02.286301309 +0000 UTC m=+0.634893701 container start c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:21:02 compute-0 python3.9[205438]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:02 compute-0 sudo[205436]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:02 compute-0 podman[205268]: 2026-01-31 08:21:02.440352931 +0000 UTC m=+0.788945313 container attach c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:21:02 compute-0 kind_lovelace[205407]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:21:02 compute-0 kind_lovelace[205407]: --> All data devices are unavailable
Jan 31 08:21:02 compute-0 systemd[1]: libpod-c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1.scope: Deactivated successfully.
Jan 31 08:21:02 compute-0 podman[205268]: 2026-01-31 08:21:02.776120776 +0000 UTC m=+1.124713138 container died c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:02 compute-0 sudo[205614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqndjwyhusaqauspjnlxwodhwtzumeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847662.535579-992-211568557040336/AnsiballZ_copy.py'
Jan 31 08:21:02 compute-0 sudo[205614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:03 compute-0 python3.9[205616]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:03 compute-0 sudo[205614]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-903aeca8114bec3f63ea19ce7e92f39461967a6467aaa8c7754036075240ec1e-merged.mount: Deactivated successfully.
Jan 31 08:21:03 compute-0 sudo[205767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwuzkpxakhfjjnntkcmklfjbukwqttgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847663.3316023-1028-47530629435756/AnsiballZ_copy.py'
Jan 31 08:21:03 compute-0 sudo[205767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:03 compute-0 ceph-mon[75227]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:03 compute-0 python3.9[205769]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:03 compute-0 sudo[205767]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:04 compute-0 sudo[205919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuwjcqdmujczvhxbbnmsfsmtwtintyoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847664.3001928-1028-41459521228014/AnsiballZ_copy.py'
Jan 31 08:21:04 compute-0 sudo[205919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:04 compute-0 podman[205268]: 2026-01-31 08:21:04.553017859 +0000 UTC m=+2.901610251 container remove c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lovelace, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:21:04 compute-0 sudo[204734]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:04 compute-0 systemd[1]: libpod-conmon-c38534bfc1f145c9297dc8748db793e70e3a55bfed577a9726c09e5fb0cf14b1.scope: Deactivated successfully.
Jan 31 08:21:04 compute-0 sudo[205922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:04 compute-0 sudo[205922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:04 compute-0 sudo[205922]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:04 compute-0 sudo[205947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:21:04 compute-0 sudo[205947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:04 compute-0 python3.9[205921]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:04 compute-0 sudo[205919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:05 compute-0 podman[206008]: 2026-01-31 08:21:04.949878193 +0000 UTC m=+0.021766944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:05 compute-0 podman[206008]: 2026-01-31 08:21:05.174824765 +0000 UTC m=+0.246713536 container create 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:21:05 compute-0 sudo[206147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mruegtlkbouhcjbkitgkrkiajubxjizr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847664.9803224-1028-43692697877290/AnsiballZ_copy.py'
Jan 31 08:21:05 compute-0 sudo[206147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:05 compute-0 python3.9[206149]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:05 compute-0 sudo[206147]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:05 compute-0 systemd[1]: Started libpod-conmon-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope.
Jan 31 08:21:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:05 compute-0 podman[206008]: 2026-01-31 08:21:05.669420638 +0000 UTC m=+0.741309399 container init 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:05 compute-0 podman[206008]: 2026-01-31 08:21:05.679220519 +0000 UTC m=+0.751109290 container start 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:05 compute-0 funny_dubinsky[206152]: 167 167
Jan 31 08:21:05 compute-0 systemd[1]: libpod-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope: Deactivated successfully.
Jan 31 08:21:05 compute-0 conmon[206152]: conmon 3f2e42e15da9d85641c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope/container/memory.events
Jan 31 08:21:05 compute-0 podman[206008]: 2026-01-31 08:21:05.81163173 +0000 UTC m=+0.883520521 container attach 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:21:05 compute-0 podman[206008]: 2026-01-31 08:21:05.812134145 +0000 UTC m=+0.884022916 container died 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:21:05 compute-0 sudo[206318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnsxzvxnicidvsjjwfafusblidjzfcwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847665.669365-1028-211818118394540/AnsiballZ_copy.py'
Jan 31 08:21:05 compute-0 sudo[206318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5747cdaa4d27e9bd9e15a8409b313a0eb84e2b0120676318fb0f5fef53ef8ed1-merged.mount: Deactivated successfully.
Jan 31 08:21:06 compute-0 python3.9[206320]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:06 compute-0 sudo[206318]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:06 compute-0 ceph-mon[75227]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:06 compute-0 sudo[206471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfhwliwzyxxbbpqlhjhfbdznoetilfnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847666.2231975-1028-154121911997004/AnsiballZ_copy.py'
Jan 31 08:21:06 compute-0 sudo[206471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:06 compute-0 python3.9[206473]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:06 compute-0 sudo[206471]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:07 compute-0 sudo[206623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhcwdqofixsfyisiysqxotktqhhnjusy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847666.8871138-1064-97951143683164/AnsiballZ_systemd.py'
Jan 31 08:21:07 compute-0 sudo[206623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:07 compute-0 podman[206008]: 2026-01-31 08:21:07.235805753 +0000 UTC m=+2.307694484 container remove 3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:21:07 compute-0 systemd[1]: libpod-conmon-3f2e42e15da9d85641c3ad87267f4c8f3824a9a223143feb3696db285d851338.scope: Deactivated successfully.
Jan 31 08:21:07 compute-0 python3.9[206625]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:21:07 compute-0 systemd[1]: Reloading.
Jan 31 08:21:07 compute-0 podman[206633]: 2026-01-31 08:21:07.377730327 +0000 UTC m=+0.025667766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:07 compute-0 systemd-sysv-generator[206669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:07 compute-0 systemd-rc-local-generator[206666]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:07 compute-0 podman[206633]: 2026-01-31 08:21:07.560757219 +0000 UTC m=+0.208694628 container create fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:21:07 compute-0 ceph-mon[75227]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:07 compute-0 systemd[1]: Started libpod-conmon-fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11.scope.
Jan 31 08:21:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:07 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 08:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:07 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 08:21:07 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 08:21:07 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 08:21:07 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 31 08:21:07 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 31 08:21:07 compute-0 sudo[206623]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:08 compute-0 podman[206633]: 2026-01-31 08:21:08.030781327 +0000 UTC m=+0.678718696 container init fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:21:08 compute-0 podman[206633]: 2026-01-31 08:21:08.052128779 +0000 UTC m=+0.700066148 container start fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]: {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:     "0": [
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:         {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "devices": [
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "/dev/loop3"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             ],
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_name": "ceph_lv0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_size": "21470642176",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "name": "ceph_lv0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "tags": {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cluster_name": "ceph",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.crush_device_class": "",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.encrypted": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.objectstore": "bluestore",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osd_id": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.type": "block",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.vdo": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.with_tpm": "0"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             },
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "type": "block",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "vg_name": "ceph_vg0"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:         }
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:     ],
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:     "1": [
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:         {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "devices": [
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "/dev/loop4"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             ],
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_name": "ceph_lv1",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_size": "21470642176",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "name": "ceph_lv1",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "tags": {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cluster_name": "ceph",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.crush_device_class": "",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.encrypted": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.objectstore": "bluestore",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osd_id": "1",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.type": "block",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.vdo": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.with_tpm": "0"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             },
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "type": "block",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "vg_name": "ceph_vg1"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:         }
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:     ],
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:     "2": [
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:         {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "devices": [
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "/dev/loop5"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             ],
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_name": "ceph_lv2",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_size": "21470642176",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "name": "ceph_lv2",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "tags": {
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.cluster_name": "ceph",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.crush_device_class": "",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.encrypted": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.objectstore": "bluestore",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osd_id": "2",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.type": "block",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.vdo": "0",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:                 "ceph.with_tpm": "0"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             },
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "type": "block",
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:             "vg_name": "ceph_vg2"
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:         }
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]:     ]
Jan 31 08:21:08 compute-0 mystifying_chaum[206686]: }
Jan 31 08:21:08 compute-0 sudo[206849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlblbnfchkgemsdqsqyfdsnegqlnrnqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847668.0722342-1064-90889624719655/AnsiballZ_systemd.py'
Jan 31 08:21:08 compute-0 sudo[206849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:08 compute-0 podman[206633]: 2026-01-31 08:21:08.33117509 +0000 UTC m=+0.979112539 container attach fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:21:08 compute-0 systemd[1]: libpod-fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11.scope: Deactivated successfully.
Jan 31 08:21:08 compute-0 podman[206852]: 2026-01-31 08:21:08.381615784 +0000 UTC m=+0.037758202 container died fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:08 compute-0 python3.9[206851]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:21:08 compute-0 systemd[1]: Reloading.
Jan 31 08:21:08 compute-0 systemd-rc-local-generator[206885]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:08 compute-0 systemd-sysv-generator[206888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:09 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 08:21:09 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 08:21:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 08:21:09 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 08:21:09 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 08:21:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 08:21:09 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 08:21:09 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 08:21:09 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 08:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-18abb392093924f753ee8c60bb2a028cf6153d4cbe2ad87c3c582051abfac9eb-merged.mount: Deactivated successfully.
Jan 31 08:21:09 compute-0 sudo[206849]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:09 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 08:21:09 compute-0 sudo[207079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtmefizprxsunywjjiiecdorgixyxrlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847669.2779026-1064-202324709981346/AnsiballZ_systemd.py'
Jan 31 08:21:09 compute-0 sudo[207079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:09 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 08:21:09 compute-0 systemd[1]: Started dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 08:21:09 compute-0 python3.9[207081]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:21:09 compute-0 systemd[1]: Reloading.
Jan 31 08:21:10 compute-0 systemd-sysv-generator[207117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:10 compute-0 systemd-rc-local-generator[207114]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:10 compute-0 podman[206852]: 2026-01-31 08:21:10.129449315 +0000 UTC m=+1.785591713 container remove fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:21:10 compute-0 sudo[205947]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:10 compute-0 systemd[1]: libpod-conmon-fda45f451a00037a76dfef0ca7a653a35ac5d6afe2b9e0a1307caebe47199f11.scope: Deactivated successfully.
Jan 31 08:21:10 compute-0 sudo[207123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:21:10 compute-0 ceph-mon[75227]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:10 compute-0 sudo[207123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:10 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 08:21:10 compute-0 sudo[207123]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:10 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 08:21:10 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 08:21:10 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 08:21:10 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 31 08:21:10 compute-0 sudo[207154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:21:10 compute-0 sudo[207154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:10 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 31 08:21:10 compute-0 sudo[207079]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:10 compute-0 podman[207279]: 2026-01-31 08:21:10.56157366 +0000 UTC m=+0.022491075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:10 compute-0 podman[207279]: 2026-01-31 08:21:10.673441043 +0000 UTC m=+0.134358428 container create 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:21:10 compute-0 sudo[207374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbbvafuhqsnrkmujyzofyyqcypuwqvas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847670.4941807-1064-21745251184410/AnsiballZ_systemd.py'
Jan 31 08:21:10 compute-0 sudo[207374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:10 compute-0 systemd[1]: Started libpod-conmon-3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8.scope.
Jan 31 08:21:10 compute-0 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 20804ff9-c985-44af-91ff-dcb7ac8409b8
Jan 31 08:21:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:10 compute-0 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 08:21:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:10 compute-0 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 20804ff9-c985-44af-91ff-dcb7ac8409b8
Jan 31 08:21:10 compute-0 setroubleshoot[206901]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 31 08:21:11 compute-0 python3.9[207376]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:21:11 compute-0 systemd[1]: Reloading.
Jan 31 08:21:11 compute-0 podman[207279]: 2026-01-31 08:21:11.091952828 +0000 UTC m=+0.552870223 container init 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:21:11 compute-0 podman[207279]: 2026-01-31 08:21:11.097688282 +0000 UTC m=+0.558605677 container start 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:21:11 compute-0 infallible_shockley[207380]: 167 167
Jan 31 08:21:11 compute-0 systemd-rc-local-generator[207417]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:11 compute-0 systemd-sysv-generator[207424]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:11 compute-0 podman[207279]: 2026-01-31 08:21:11.279462587 +0000 UTC m=+0.740379992 container attach 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:21:11 compute-0 podman[207279]: 2026-01-31 08:21:11.280745444 +0000 UTC m=+0.741662829 container died 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:21:11 compute-0 systemd[1]: libpod-3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8.scope: Deactivated successfully.
Jan 31 08:21:11 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 08:21:11 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 08:21:11 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 08:21:11 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 08:21:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 08:21:11 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 08:21:11 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 08:21:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 08:21:11 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 08:21:11 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 08:21:11 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 08:21:11 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 08:21:11 compute-0 sudo[207374]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:11 compute-0 sudo[207609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sehdhapzhxbryhyocbwkjsfuxcucqazw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847671.6362514-1064-3006620824780/AnsiballZ_systemd.py'
Jan 31 08:21:11 compute-0 sudo[207609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-228773be1f05df366aa23daeb83d3d9e5a8e8e293e59ba1cb0d8170de0c1961d-merged.mount: Deactivated successfully.
Jan 31 08:21:12 compute-0 python3.9[207611]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:21:12 compute-0 systemd[1]: Reloading.
Jan 31 08:21:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:12 compute-0 systemd-rc-local-generator[207636]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:12 compute-0 systemd-sysv-generator[207639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:12 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 08:21:12 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 08:21:12 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 08:21:12 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 08:21:12 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 08:21:12 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 08:21:12 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 31 08:21:12 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 31 08:21:12 compute-0 sudo[207609]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:12 compute-0 ceph-mon[75227]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:13 compute-0 podman[207279]: 2026-01-31 08:21:13.015434108 +0000 UTC m=+2.476351493 container remove 3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:13 compute-0 systemd[1]: libpod-conmon-3bd4f2211b1391868a35362bf67071b50260d0c671ccafffebead4077f8044a8.scope: Deactivated successfully.
Jan 31 08:21:13 compute-0 sudo[207820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrgttvoysoqbirbryhuipljhltpcdqkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847672.8358014-1101-168636032914642/AnsiballZ_file.py'
Jan 31 08:21:13 compute-0 sudo[207820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:13 compute-0 podman[207830]: 2026-01-31 08:21:13.130423021 +0000 UTC m=+0.027398106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:21:13 compute-0 python3.9[207824]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:13 compute-0 sudo[207820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:13 compute-0 podman[207830]: 2026-01-31 08:21:13.509343731 +0000 UTC m=+0.406318786 container create 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:21:13 compute-0 sudo[207993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkiwiwanpfjbsrslskdeqexpmqboufv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847673.5441303-1109-138493140518144/AnsiballZ_find.py'
Jan 31 08:21:13 compute-0 sudo[207993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:13 compute-0 systemd[1]: Started libpod-conmon-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope.
Jan 31 08:21:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:21:14 compute-0 python3.9[207995]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:21:14 compute-0 sudo[207993]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:14 compute-0 podman[207830]: 2026-01-31 08:21:14.108430897 +0000 UTC m=+1.005405992 container init 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:14 compute-0 ceph-mon[75227]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:14 compute-0 podman[207830]: 2026-01-31 08:21:14.114846361 +0000 UTC m=+1.011821406 container start 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:21:14 compute-0 podman[207830]: 2026-01-31 08:21:14.119434312 +0000 UTC m=+1.016409417 container attach 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:21:14 compute-0 sudo[208162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljocnpylidymclkjnakurgnyxvnpsefo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847674.1701033-1117-170707246107150/AnsiballZ_command.py'
Jan 31 08:21:14 compute-0 sudo[208162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:14 compute-0 python3.9[208164]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:21:14 compute-0 sudo[208162]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:14 compute-0 lvm[208256]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:21:14 compute-0 lvm[208257]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:21:14 compute-0 lvm[208257]: VG ceph_vg1 finished
Jan 31 08:21:14 compute-0 lvm[208256]: VG ceph_vg0 finished
Jan 31 08:21:14 compute-0 lvm[208259]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:21:14 compute-0 lvm[208259]: VG ceph_vg2 finished
Jan 31 08:21:14 compute-0 nervous_nobel[207998]: {}
Jan 31 08:21:14 compute-0 systemd[1]: libpod-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope: Deactivated successfully.
Jan 31 08:21:14 compute-0 systemd[1]: libpod-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope: Consumed 1.048s CPU time.
Jan 31 08:21:14 compute-0 podman[207830]: 2026-01-31 08:21:14.844612808 +0000 UTC m=+1.741587863 container died 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:21:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-80249d8aa17dab01859aa2757709b3e3ad448cf5fd186e5dbf691bb86723ac00-merged.mount: Deactivated successfully.
Jan 31 08:21:14 compute-0 podman[207830]: 2026-01-31 08:21:14.99450445 +0000 UTC m=+1.891479495 container remove 9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:21:15 compute-0 systemd[1]: libpod-conmon-9883c44da536bcf556288ebfe7105dc0af6cfa0cbd3578ba364f20294f641cf2.scope: Deactivated successfully.
Jan 31 08:21:15 compute-0 sudo[207154]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:21:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:21:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:21:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:21:15 compute-0 sudo[208400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:21:15 compute-0 sudo[208400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:21:15 compute-0 sudo[208400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:15 compute-0 python3.9[208399]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:21:15 compute-0 podman[208548]: 2026-01-31 08:21:15.750288162 +0000 UTC m=+0.076758679 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:21:15 compute-0 python3.9[208585]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:16 compute-0 ceph-mon[75227]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:21:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:21:16 compute-0 python3.9[208721]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847675.4712029-1136-14053700092707/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9c2345731d8b82f59a4e13abe20e8b39f999829b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:16 compute-0 sudo[208871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbsohrxykgwkgvxzfbypuflevxlzlqlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847676.474958-1151-186706548734928/AnsiballZ_command.py'
Jan 31 08:21:16 compute-0 sudo[208871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:16 compute-0 python3.9[208873]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 82c880e6-d992-5408-8b12-efff9c275473
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:21:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:16 compute-0 polkitd[43527]: Registered Authentication Agent for unix-process:208875:298322 (system bus name :1.2544 [pkttyagent --process 208875 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 08:21:16 compute-0 polkitd[43527]: Unregistered Authentication Agent for unix-process:208875:298322 (system bus name :1.2544, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 08:21:16 compute-0 polkitd[43527]: Registered Authentication Agent for unix-process:208874:298322 (system bus name :1.2545 [pkttyagent --process 208874 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 08:21:16 compute-0 polkitd[43527]: Unregistered Authentication Agent for unix-process:208874:298322 (system bus name :1.2545, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 08:21:16 compute-0 sudo[208871]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:17 compute-0 python3.9[209035]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:17 compute-0 sudo[209185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voaxkoyhwzbgmfhieayvorqbcsbmysby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847677.6163566-1167-230889282381068/AnsiballZ_command.py'
Jan 31 08:21:17 compute-0 sudo[209185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:21:17.880 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:21:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:21:17.881 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:21:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:21:17.881 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:21:18 compute-0 ceph-mon[75227]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:18 compute-0 sudo[209185]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:18 compute-0 sudo[209338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zblbhdkdcabiezehhscptmzaozzqffxb ; FSID=82c880e6-d992-5408-8b12-efff9c275473 KEY=AQDNt31pAAAAABAAYp99ADqsmeg1iSEhkwiYUA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847678.249699-1175-113996325192087/AnsiballZ_command.py'
Jan 31 08:21:18 compute-0 sudo[209338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:18 compute-0 polkitd[43527]: Registered Authentication Agent for unix-process:209341:298510 (system bus name :1.2548 [pkttyagent --process 209341 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 31 08:21:18 compute-0 polkitd[43527]: Unregistered Authentication Agent for unix-process:209341:298510 (system bus name :1.2548, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 31 08:21:18 compute-0 sudo[209338]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:19 compute-0 podman[209417]: 2026-01-31 08:21:19.183401023 +0000 UTC m=+0.067222186 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 31 08:21:19 compute-0 sudo[209515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcyvfpzbnbxdikmezunemuvsurginczk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847679.0425878-1183-200004422056320/AnsiballZ_copy.py'
Jan 31 08:21:19 compute-0 sudo[209515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:19 compute-0 python3.9[209517]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:19 compute-0 sudo[209515]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:19 compute-0 sudo[209667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmlwqnvczhojhpvliaarzmveocvnyhqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847679.709601-1191-26481978990601/AnsiballZ_stat.py'
Jan 31 08:21:19 compute-0 sudo[209667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:20 compute-0 ceph-mon[75227]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:20 compute-0 python3.9[209669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:20 compute-0 sudo[209667]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:20 compute-0 sudo[209790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljslhwhlodiqmwchotygqpahczwxrovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847679.709601-1191-26481978990601/AnsiballZ_copy.py'
Jan 31 08:21:20 compute-0 sudo[209790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:20 compute-0 python3.9[209792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847679.709601-1191-26481978990601/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:20 compute-0 sudo[209790]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:20 compute-0 systemd[1]: dbus-:1.0-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 08:21:20 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 08:21:21 compute-0 sudo[209942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbabnhzhqkfxgbsrybrhqnmwpuizjvhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847680.9842656-1207-176833815836039/AnsiballZ_file.py'
Jan 31 08:21:21 compute-0 sudo[209942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:21 compute-0 python3.9[209944]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:21 compute-0 sudo[209942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:21 compute-0 sudo[210094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebmqablaxcjhnpcgneafkfdzzccowwue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847681.6012113-1215-175304916101619/AnsiballZ_stat.py'
Jan 31 08:21:21 compute-0 sudo[210094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:22 compute-0 python3.9[210096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:22 compute-0 sudo[210094]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:22 compute-0 ceph-mon[75227]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:22 compute-0 sudo[210172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icfcngfypgcvdfjlasmwvavlorvequmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847681.6012113-1215-175304916101619/AnsiballZ_file.py'
Jan 31 08:21:22 compute-0 sudo[210172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:22 compute-0 python3.9[210174]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:22 compute-0 sudo[210172]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:22 compute-0 sudo[210324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ougufpdwmvkkjuzothyvnhbyzncfvmya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847682.6730034-1227-207004815977086/AnsiballZ_stat.py'
Jan 31 08:21:22 compute-0 sudo[210324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:23 compute-0 python3.9[210326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:23 compute-0 sudo[210324]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:23 compute-0 sudo[210402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrkxbojkjzdxrhpmjiupyvxorqigwtzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847682.6730034-1227-207004815977086/AnsiballZ_file.py'
Jan 31 08:21:23 compute-0 sudo[210402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:23 compute-0 python3.9[210404]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3hp1eybe recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:23 compute-0 sudo[210402]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:23 compute-0 sudo[210554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyagpiayenjzfyfahjanuiwatavkqlol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847683.67755-1239-148469489993563/AnsiballZ_stat.py'
Jan 31 08:21:23 compute-0 sudo[210554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:24 compute-0 python3.9[210556]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:24 compute-0 ceph-mon[75227]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:24 compute-0 sudo[210554]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:24 compute-0 sudo[210632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giowhcjumrahitihjxmsaclstrqefsre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847683.67755-1239-148469489993563/AnsiballZ_file.py'
Jan 31 08:21:24 compute-0 sudo[210632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:24 compute-0 python3.9[210634]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:24 compute-0 sudo[210632]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:25 compute-0 sudo[210784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bngzqfgimwmgiivbgtlbnqtygltyenua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847684.8085606-1252-278167902612017/AnsiballZ_command.py'
Jan 31 08:21:25 compute-0 sudo[210784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:25 compute-0 python3.9[210786]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:21:25 compute-0 sudo[210784]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:25 compute-0 sudo[210937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgoizlmlezxpnwderdqpurajdctunbqi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847685.4781291-1260-184554900724661/AnsiballZ_edpm_nftables_from_files.py'
Jan 31 08:21:25 compute-0 sudo[210937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:26 compute-0 python3[210939]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 08:21:26 compute-0 sudo[210937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:26 compute-0 ceph-mon[75227]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:26 compute-0 sudo[211089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkaghlclijczvayvyqcnvayovhsowsol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847686.1918628-1268-183643515798997/AnsiballZ_stat.py'
Jan 31 08:21:26 compute-0 sudo[211089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:26 compute-0 python3.9[211091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:26 compute-0 sudo[211089]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:26 compute-0 sudo[211167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbwdgepgmbetjqzzawnpgsdazsuxodor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847686.1918628-1268-183643515798997/AnsiballZ_file.py'
Jan 31 08:21:26 compute-0 sudo[211167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:27 compute-0 python3.9[211169]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:27 compute-0 sudo[211167]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:27 compute-0 sudo[211319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdswuiinvddwfsbfiblpvtygmclrsuqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847687.2365746-1280-166364935547996/AnsiballZ_stat.py'
Jan 31 08:21:27 compute-0 sudo[211319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:27 compute-0 python3.9[211321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:27 compute-0 sudo[211319]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:28 compute-0 sudo[211444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daspagxqzbedoyrkgykbmhzlvjutxojj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847687.2365746-1280-166364935547996/AnsiballZ_copy.py'
Jan 31 08:21:28 compute-0 sudo[211444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:28 compute-0 python3.9[211446]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847687.2365746-1280-166364935547996/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:28 compute-0 sudo[211444]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:28 compute-0 ceph-mon[75227]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:28 compute-0 sudo[211596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raitcbqwdlcazsvpnpcunlevuvubhyje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847688.46116-1295-249463400849074/AnsiballZ_stat.py'
Jan 31 08:21:28 compute-0 sudo[211596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:28 compute-0 python3.9[211598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:28 compute-0 sudo[211596]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:29 compute-0 sudo[211674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdjlqrwjlvwinbtsblvmpvgaxqeybbuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847688.46116-1295-249463400849074/AnsiballZ_file.py'
Jan 31 08:21:29 compute-0 sudo[211674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:29 compute-0 python3.9[211676]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:29 compute-0 sudo[211674]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:29 compute-0 ceph-mon[75227]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:29 compute-0 sudo[211826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fskdojteviyaiccrepmdkbwyixwclxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847689.5850036-1307-8915370944443/AnsiballZ_stat.py'
Jan 31 08:21:29 compute-0 sudo[211826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:30 compute-0 python3.9[211828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:30 compute-0 sudo[211826]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:30 compute-0 sudo[211904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxazkacintezcxqqqptieqhhatdejmki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847689.5850036-1307-8915370944443/AnsiballZ_file.py'
Jan 31 08:21:30 compute-0 sudo[211904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:30 compute-0 python3.9[211906]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:30 compute-0 sudo[211904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:30 compute-0 sudo[212056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdsnvspkmilxnvlpkhnhkebtwdiebhip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847690.5722914-1319-188979069630669/AnsiballZ_stat.py'
Jan 31 08:21:30 compute-0 sudo[212056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:31 compute-0 python3.9[212058]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:31 compute-0 sudo[212056]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:31 compute-0 sudo[212181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwalwmnzximstlteodkxjsvaevfbkjnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847690.5722914-1319-188979069630669/AnsiballZ_copy.py'
Jan 31 08:21:31 compute-0 sudo[212181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:31 compute-0 python3.9[212183]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769847690.5722914-1319-188979069630669/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:31 compute-0 sudo[212181]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:21:31
Jan 31 08:21:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:21:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:21:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms']
Jan 31 08:21:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:21:31 compute-0 ceph-mon[75227]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:32 compute-0 sudo[212333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvkmwqqbcmzrbkybarfcrguewrdpwazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847691.7484872-1334-235720496601139/AnsiballZ_file.py'
Jan 31 08:21:32 compute-0 sudo[212333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:32 compute-0 python3.9[212335]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:32 compute-0 sudo[212333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:32 compute-0 sudo[212485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amxeorjzeknnbdncoyhoxbewposrodua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847692.393143-1342-86656794704819/AnsiballZ_command.py'
Jan 31 08:21:32 compute-0 sudo[212485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:32 compute-0 python3.9[212487]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:21:32 compute-0 sudo[212485]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:21:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:21:33 compute-0 sudo[212640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epxckrizsvkwqavrcznzmkcqrcqefsqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847693.0080416-1350-59587453204167/AnsiballZ_blockinfile.py'
Jan 31 08:21:33 compute-0 sudo[212640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:33 compute-0 python3.9[212642]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:33 compute-0 sudo[212640]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:34 compute-0 ceph-mon[75227]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:34 compute-0 sudo[212792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxrzzpqpfsasnclzfdklpsoqpannrsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847693.855932-1359-255005654225613/AnsiballZ_command.py'
Jan 31 08:21:34 compute-0 sudo[212792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:34 compute-0 python3.9[212794]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:21:34 compute-0 sudo[212792]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:34 compute-0 sudo[212945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxuiapzbrwjlyfyynjfmfdpmjveswrss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847694.5856228-1367-154035117286930/AnsiballZ_stat.py'
Jan 31 08:21:34 compute-0 sudo[212945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:35 compute-0 python3.9[212947]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:21:35 compute-0 sudo[212945]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:35 compute-0 sudo[213099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btdzcxohhgnteaoqagqocgpqrubaqhap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847695.193425-1375-94920377323389/AnsiballZ_command.py'
Jan 31 08:21:35 compute-0 sudo[213099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:35 compute-0 python3.9[213101]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:21:35 compute-0 sudo[213099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:36 compute-0 sudo[213254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlanxtlgvpppqgfxcpxknlwvqiyxqkvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847695.7729356-1383-181289352702740/AnsiballZ_file.py'
Jan 31 08:21:36 compute-0 sudo[213254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:36 compute-0 ceph-mon[75227]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:36 compute-0 python3.9[213256]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:36 compute-0 sudo[213254]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:36 compute-0 sudo[213406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfbclehuujrrwyehultslcrkugtqmzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847696.394935-1391-59840320344174/AnsiballZ_stat.py'
Jan 31 08:21:36 compute-0 sudo[213406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:36 compute-0 python3.9[213408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:36 compute-0 sudo[213406]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:37 compute-0 sudo[213529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auwnmmduhsntxakbotnscjlwvuantykm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847696.394935-1391-59840320344174/AnsiballZ_copy.py'
Jan 31 08:21:37 compute-0 sudo[213529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:37 compute-0 python3.9[213531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847696.394935-1391-59840320344174/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:37 compute-0 sudo[213529]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:37 compute-0 sudo[213681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfofjdijmavbsmoddujojnwnmaaypxqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847697.455447-1406-235554175796528/AnsiballZ_stat.py'
Jan 31 08:21:37 compute-0 sudo[213681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:37 compute-0 python3.9[213683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:37 compute-0 sudo[213681]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:38 compute-0 ceph-mon[75227]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:38 compute-0 sudo[213804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anaqlebekljdmrqqinhdcmrxlilrxkez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847697.455447-1406-235554175796528/AnsiballZ_copy.py'
Jan 31 08:21:38 compute-0 sudo[213804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:38 compute-0 python3.9[213806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847697.455447-1406-235554175796528/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:38 compute-0 sudo[213804]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:38 compute-0 sudo[213956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyzqeisvpbiogddbwejrhyaaugtmsqhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847698.6476796-1421-220953191478066/AnsiballZ_stat.py'
Jan 31 08:21:38 compute-0 sudo[213956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:39 compute-0 python3.9[213958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:21:39 compute-0 sudo[213956]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:39 compute-0 sudo[214079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewocbuvpiuzcrztbbvlmjzzaovxxulb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847698.6476796-1421-220953191478066/AnsiballZ_copy.py'
Jan 31 08:21:39 compute-0 sudo[214079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:39 compute-0 python3.9[214081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847698.6476796-1421-220953191478066/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:21:39 compute-0 sudo[214079]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:39 compute-0 sudo[214231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frqmvnvwzwqnglwyeahvkumhgfpyifrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847699.7231603-1436-4413644484147/AnsiballZ_systemd.py'
Jan 31 08:21:39 compute-0 sudo[214231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:40 compute-0 ceph-mon[75227]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:40 compute-0 python3.9[214233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:21:40 compute-0 systemd[1]: Reloading.
Jan 31 08:21:40 compute-0 systemd-rc-local-generator[214252]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:40 compute-0 systemd-sysv-generator[214258]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:40 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 08:21:40 compute-0 sudo[214231]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:41 compute-0 sudo[214422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubiksvpaoylgqcvhoxdrnltmdrkyywru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847700.823024-1444-98516047109989/AnsiballZ_systemd.py'
Jan 31 08:21:41 compute-0 sudo[214422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:41 compute-0 python3.9[214424]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 08:21:41 compute-0 systemd[1]: Reloading.
Jan 31 08:21:41 compute-0 systemd-rc-local-generator[214451]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:41 compute-0 systemd-sysv-generator[214455]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:41 compute-0 systemd[1]: Reloading.
Jan 31 08:21:41 compute-0 systemd-sysv-generator[214487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:21:41 compute-0 systemd-rc-local-generator[214483]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:21:41 compute-0 sudo[214422]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:42 compute-0 ceph-mon[75227]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:42 compute-0 sshd-session[155524]: Connection closed by 192.168.122.30 port 55366
Jan 31 08:21:42 compute-0 sshd-session[155521]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:21:42 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 08:21:42 compute-0 systemd[1]: session-49.scope: Consumed 3min 1.777s CPU time.
Jan 31 08:21:42 compute-0 systemd-logind[793]: Session 49 logged out. Waiting for processes to exit.
Jan 31 08:21:42 compute-0 systemd-logind[793]: Removed session 49.
Jan 31 08:21:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:21:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:21:44 compute-0 ceph-mon[75227]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:46 compute-0 ceph-mon[75227]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:46 compute-0 podman[214521]: 2026-01-31 08:21:46.224734341 +0000 UTC m=+0.113965099 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:21:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:47 compute-0 sshd-session[214547]: Accepted publickey for zuul from 192.168.122.30 port 57404 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:21:47 compute-0 systemd-logind[793]: New session 50 of user zuul.
Jan 31 08:21:47 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 31 08:21:48 compute-0 sshd-session[214547]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:21:48 compute-0 ceph-mon[75227]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:48 compute-0 python3.9[214700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:21:49 compute-0 podman[214828]: 2026-01-31 08:21:49.939063687 +0000 UTC m=+0.063467341 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:21:50 compute-0 python3.9[214865]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:21:50 compute-0 network[214889]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:21:50 compute-0 network[214890]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:21:50 compute-0 network[214891]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:21:50 compute-0 ceph-mon[75227]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:52 compute-0 ceph-mon[75227]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:53 compute-0 sudo[215161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucpoghqzvyfjpveacmrhdqbbgcmthizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847712.8336017-42-77107766558841/AnsiballZ_setup.py'
Jan 31 08:21:53 compute-0 sudo[215161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:53 compute-0 python3.9[215163]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 08:21:53 compute-0 sudo[215161]: pam_unix(sudo:session): session closed for user root
Jan 31 08:21:54 compute-0 sudo[215245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowjhvekfbqucdepxbsgyifswmamehct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847712.8336017-42-77107766558841/AnsiballZ_dnf.py'
Jan 31 08:21:54 compute-0 sudo[215245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:21:54 compute-0 ceph-mon[75227]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:54 compute-0 python3.9[215247]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:21:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:56 compute-0 ceph-mon[75227]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:21:58 compute-0 ceph-mon[75227]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:21:59 compute-0 sudo[215245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:00 compute-0 ceph-mon[75227]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:00 compute-0 sudo[215398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgrikadmxcdnowmhtuuceawgklbnbktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847719.9888723-54-167226903515792/AnsiballZ_stat.py'
Jan 31 08:22:00 compute-0 sudo[215398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:00 compute-0 python3.9[215400]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:22:00 compute-0 sudo[215398]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:01 compute-0 sudo[215550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bztmjcisqwvlhodbllrjdqeewemqtlzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847720.8254545-64-190699180986053/AnsiballZ_command.py'
Jan 31 08:22:01 compute-0 sudo[215550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:01 compute-0 python3.9[215552]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:22:01 compute-0 ceph-mon[75227]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:01 compute-0 sudo[215550]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:01 compute-0 sudo[215703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iymrsceuisdxivpwfdccurdnkvyfztzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847721.7136905-74-175381231653065/AnsiballZ_stat.py'
Jan 31 08:22:01 compute-0 sudo[215703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:02 compute-0 python3.9[215705]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:22:02 compute-0 sudo[215703]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:02 compute-0 sudo[215855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqtztcfswwvjsfgsleymnskjahfgtxxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847722.3178437-82-34697201206291/AnsiballZ_command.py'
Jan 31 08:22:02 compute-0 sudo[215855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:02 compute-0 python3.9[215857]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:22:02 compute-0 sudo[215855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:03 compute-0 sudo[216008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwyxsilwgnjaxhmfgqjfbszijyvyrisg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847723.1160784-90-210799825723806/AnsiballZ_stat.py'
Jan 31 08:22:03 compute-0 sudo[216008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:03 compute-0 python3.9[216010]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:22:03 compute-0 sudo[216008]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:03 compute-0 ceph-mon[75227]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:04 compute-0 sudo[216131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fddbmbiwbbqkpiodgiwwzgyiekgtmpsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847723.1160784-90-210799825723806/AnsiballZ_copy.py'
Jan 31 08:22:04 compute-0 sudo[216131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:04 compute-0 python3.9[216133]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847723.1160784-90-210799825723806/.source.iscsi _original_basename=.d_h60ozi follow=False checksum=dbd5f8f0b80c1618236eabb12e8c4b08e4109ee7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:04 compute-0 sudo[216131]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:04 compute-0 sudo[216283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dibqgjacobkczomaoklealkkdbcmbtma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847724.5705783-105-233272286514635/AnsiballZ_file.py'
Jan 31 08:22:04 compute-0 sudo[216283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:05 compute-0 python3.9[216285]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:05 compute-0 sudo[216283]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:05 compute-0 sudo[216435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnppowmzopwcdahxarmsefcrpgifgynr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847725.3591595-113-14153613215509/AnsiballZ_lineinfile.py'
Jan 31 08:22:05 compute-0 sudo[216435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:05 compute-0 ceph-mon[75227]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:05 compute-0 python3.9[216437]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:05 compute-0 sudo[216435]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:06 compute-0 sudo[216587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpxnccsabijbzvmpiapznymasbskyzfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847726.2094853-122-137582850982354/AnsiballZ_systemd_service.py'
Jan 31 08:22:06 compute-0 sudo[216587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:07 compute-0 python3.9[216589]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:07 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 08:22:07 compute-0 sudo[216587]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:07 compute-0 sudo[216743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ornfsufnovaxzsftambiyfhjsohslnar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847727.2627168-130-71242751284640/AnsiballZ_systemd_service.py'
Jan 31 08:22:07 compute-0 sudo[216743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:07 compute-0 python3.9[216745]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:07 compute-0 ceph-mon[75227]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:08 compute-0 systemd[1]: Reloading.
Jan 31 08:22:08 compute-0 systemd-sysv-generator[216772]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:22:08 compute-0 systemd-rc-local-generator[216766]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:22:08 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 08:22:08 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 08:22:08 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 08:22:08 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 08:22:08 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 08:22:08 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 08:22:08 compute-0 sudo[216743]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:09 compute-0 python3.9[216944]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:22:09 compute-0 network[216961]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:22:09 compute-0 network[216962]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:22:09 compute-0 network[216963]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:22:09 compute-0 ceph-mon[75227]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:11 compute-0 ceph-mon[75227]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:12 compute-0 sudo[217234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaatdujyzvciwoughcbsntoejrijmjvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847731.93085-153-34704192282185/AnsiballZ_dnf.py'
Jan 31 08:22:12 compute-0 sudo[217234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:12 compute-0 python3.9[217236]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:22:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:13 compute-0 ceph-mon[75227]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:15 compute-0 sudo[217240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:15 compute-0 sudo[217240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:15 compute-0 sudo[217240]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:15 compute-0 sudo[217266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:22:15 compute-0 sudo[217266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:15 compute-0 sudo[217266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:22:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:22:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:22:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:22:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:22:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:22:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:22:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:22:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:22:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:22:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:15 compute-0 sudo[217326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:15 compute-0 sudo[217326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:15 compute-0 sudo[217326]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:15 compute-0 sudo[217351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:22:15 compute-0 sudo[217351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:16 compute-0 ceph-mon[75227]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:22:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:22:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:22:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:22:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.234234435 +0000 UTC m=+0.047332489 container create 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:22:16 compute-0 systemd[1]: Started libpod-conmon-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope.
Jan 31 08:22:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.301802621 +0000 UTC m=+0.114900745 container init 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.213453642 +0000 UTC m=+0.026551746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.310804063 +0000 UTC m=+0.123902137 container start 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:22:16 compute-0 silly_mendel[217416]: 167 167
Jan 31 08:22:16 compute-0 systemd[1]: libpod-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope: Deactivated successfully.
Jan 31 08:22:16 compute-0 conmon[217416]: conmon 18a35777ae391412d57c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope/container/memory.events
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.318802998 +0000 UTC m=+0.131901072 container attach 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.319632901 +0000 UTC m=+0.132730945 container died 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:22:16 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-a067d2dca60cf88bd71b5ea51f7bb9ce05265b32c47332e29d811b524be83807-merged.mount: Deactivated successfully.
Jan 31 08:22:16 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:22:16 compute-0 podman[217406]: 2026-01-31 08:22:16.359004456 +0000 UTC m=+0.100278405 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:22:16 compute-0 podman[217392]: 2026-01-31 08:22:16.365103067 +0000 UTC m=+0.178201111 container remove 18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_mendel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:22:16 compute-0 systemd[1]: libpod-conmon-18a35777ae391412d57c9fa14752577f68d449e08f0eb550b5f8727e96a5bb28.scope: Deactivated successfully.
Jan 31 08:22:16 compute-0 systemd[1]: Reloading.
Jan 31 08:22:16 compute-0 systemd-rc-local-generator[217486]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:22:16 compute-0 systemd-sysv-generator[217492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:22:16 compute-0 podman[217487]: 2026-01-31 08:22:16.517565464 +0000 UTC m=+0.042523064 container create 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:22:16 compute-0 podman[217487]: 2026-01-31 08:22:16.500110345 +0000 UTC m=+0.025067965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:16 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:22:16 compute-0 systemd[1]: Started libpod-conmon-4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f.scope.
Jan 31 08:22:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:17 compute-0 podman[217487]: 2026-01-31 08:22:17.005485254 +0000 UTC m=+0.530442904 container init 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:22:17 compute-0 podman[217487]: 2026-01-31 08:22:17.01780579 +0000 UTC m=+0.542763420 container start 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:22:17 compute-0 podman[217487]: 2026-01-31 08:22:17.22630738 +0000 UTC m=+0.751264990 container attach 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:22:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:17 compute-0 amazing_mestorf[217628]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:22:17 compute-0 amazing_mestorf[217628]: --> All data devices are unavailable
Jan 31 08:22:17 compute-0 systemd[1]: libpod-4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f.scope: Deactivated successfully.
Jan 31 08:22:17 compute-0 podman[217487]: 2026-01-31 08:22:17.418775671 +0000 UTC m=+0.943733291 container died 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c4456ddc51124f445a2eb47b16074ba4c8e526b5da925c7e9078aa0dfc7a5e2-merged.mount: Deactivated successfully.
Jan 31 08:22:17 compute-0 podman[217487]: 2026-01-31 08:22:17.482334984 +0000 UTC m=+1.007292624 container remove 4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_mestorf, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 08:22:17 compute-0 systemd[1]: libpod-conmon-4d738671c7cbbc5e8e436af61ca48fed2dcad28a897f5a0f999a455c8ce5c81f.scope: Deactivated successfully.
Jan 31 08:22:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:22:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:22:17 compute-0 systemd[1]: run-r3855a2c52c774a828c0ff318b8195f6d.service: Deactivated successfully.
Jan 31 08:22:17 compute-0 sudo[217351]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:17 compute-0 sudo[217234]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:17 compute-0 sudo[217659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:17 compute-0 sudo[217659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:17 compute-0 sudo[217659]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:17 compute-0 sudo[217691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:22:17 compute-0 sudo[217691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:22:17.881 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:22:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:22:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:22:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:22:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:22:17 compute-0 podman[217801]: 2026-01-31 08:22:17.887531043 +0000 UTC m=+0.047228636 container create 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 08:22:17 compute-0 systemd[1]: Started libpod-conmon-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope.
Jan 31 08:22:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:17 compute-0 podman[217801]: 2026-01-31 08:22:17.955966203 +0000 UTC m=+0.115663786 container init 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:22:17 compute-0 podman[217801]: 2026-01-31 08:22:17.860149665 +0000 UTC m=+0.019847238 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:17 compute-0 podman[217801]: 2026-01-31 08:22:17.963857225 +0000 UTC m=+0.123554778 container start 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:22:17 compute-0 podman[217801]: 2026-01-31 08:22:17.967432315 +0000 UTC m=+0.127129978 container attach 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:22:17 compute-0 nostalgic_benz[217858]: 167 167
Jan 31 08:22:17 compute-0 systemd[1]: libpod-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope: Deactivated successfully.
Jan 31 08:22:17 compute-0 conmon[217858]: conmon 188f2cccf7e65f52b8e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope/container/memory.events
Jan 31 08:22:17 compute-0 podman[217801]: 2026-01-31 08:22:17.971643893 +0000 UTC m=+0.131341496 container died 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d829cbb91b6d9ba4d8863bf4a8c4d2e82a91bf79dcead2868ca7569490216b8-merged.mount: Deactivated successfully.
Jan 31 08:22:18 compute-0 sudo[217897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdlamjoyschlipzmhxvzlgkzcbydeqxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847737.7558694-162-77224431448164/AnsiballZ_file.py'
Jan 31 08:22:18 compute-0 sudo[217897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:18 compute-0 podman[217801]: 2026-01-31 08:22:18.008748254 +0000 UTC m=+0.168445807 container remove 188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_benz, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:22:18 compute-0 systemd[1]: libpod-conmon-188f2cccf7e65f52b8e754455720bd440c29486ceed2b404c6fe6671c602f665.scope: Deactivated successfully.
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.126422036 +0000 UTC m=+0.045298302 container create 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:22:18 compute-0 systemd[1]: Started libpod-conmon-2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952.scope.
Jan 31 08:22:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.103458531 +0000 UTC m=+0.022334857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:18 compute-0 python3.9[217903]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.219689703 +0000 UTC m=+0.138565959 container init 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.223945922 +0000 UTC m=+0.142822158 container start 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.228649894 +0000 UTC m=+0.147526160 container attach 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:22:18 compute-0 ceph-mon[75227]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:18 compute-0 sudo[217897]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]: {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:     "0": [
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:         {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "devices": [
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "/dev/loop3"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             ],
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_name": "ceph_lv0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_size": "21470642176",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "name": "ceph_lv0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "tags": {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cluster_name": "ceph",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.crush_device_class": "",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.encrypted": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.objectstore": "bluestore",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osd_id": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.type": "block",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.vdo": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.with_tpm": "0"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             },
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "type": "block",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "vg_name": "ceph_vg0"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:         }
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:     ],
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:     "1": [
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:         {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "devices": [
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "/dev/loop4"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             ],
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_name": "ceph_lv1",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_size": "21470642176",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "name": "ceph_lv1",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "tags": {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cluster_name": "ceph",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.crush_device_class": "",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.encrypted": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.objectstore": "bluestore",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osd_id": "1",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.type": "block",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.vdo": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.with_tpm": "0"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             },
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "type": "block",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "vg_name": "ceph_vg1"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:         }
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:     ],
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:     "2": [
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:         {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "devices": [
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "/dev/loop5"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             ],
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_name": "ceph_lv2",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_size": "21470642176",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "name": "ceph_lv2",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "tags": {
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.cluster_name": "ceph",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.crush_device_class": "",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.encrypted": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.objectstore": "bluestore",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osd_id": "2",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.type": "block",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.vdo": "0",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:                 "ceph.with_tpm": "0"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             },
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "type": "block",
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:             "vg_name": "ceph_vg2"
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:         }
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]:     ]
Jan 31 08:22:18 compute-0 beautiful_khorana[217927]: }
Jan 31 08:22:18 compute-0 systemd[1]: libpod-2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952.scope: Deactivated successfully.
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.496590752 +0000 UTC m=+0.415466998 container died 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e9a611ab158d9042262a4d7833ae7c9b1789a2a220fb997e83c0e5bd4b33f1d-merged.mount: Deactivated successfully.
Jan 31 08:22:18 compute-0 podman[217911]: 2026-01-31 08:22:18.532242862 +0000 UTC m=+0.451119098 container remove 2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_khorana, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:18 compute-0 systemd[1]: libpod-conmon-2a98bef85e3eef5b98c8d639578c4cd9dd0ea9fdd086133cef92d5c7637fd952.scope: Deactivated successfully.
Jan 31 08:22:18 compute-0 sudo[217691]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:18 compute-0 sudo[218023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:22:18 compute-0 sudo[218023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:18 compute-0 sudo[218023]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:18 compute-0 sudo[218071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:22:18 compute-0 sudo[218071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:18 compute-0 sudo[218146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvjckagwilafaigbquxjpdhwhwhhsfej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847738.3781984-170-222219659032321/AnsiballZ_modprobe.py'
Jan 31 08:22:18 compute-0 sudo[218146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:18 compute-0 podman[218162]: 2026-01-31 08:22:18.958113771 +0000 UTC m=+0.044546580 container create 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:22:18 compute-0 python3.9[218148]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 08:22:18 compute-0 systemd[1]: Started libpod-conmon-3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae.scope.
Jan 31 08:22:18 compute-0 sudo[218146]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:19 compute-0 podman[218162]: 2026-01-31 08:22:19.035680658 +0000 UTC m=+0.122113557 container init 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:22:19 compute-0 podman[218162]: 2026-01-31 08:22:18.941070573 +0000 UTC m=+0.027503402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:19 compute-0 podman[218162]: 2026-01-31 08:22:19.044148915 +0000 UTC m=+0.130581734 container start 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:22:19 compute-0 podman[218162]: 2026-01-31 08:22:19.048488747 +0000 UTC m=+0.134921566 container attach 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:22:19 compute-0 nifty_antonelli[218182]: 167 167
Jan 31 08:22:19 compute-0 systemd[1]: libpod-3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae.scope: Deactivated successfully.
Jan 31 08:22:19 compute-0 podman[218162]: 2026-01-31 08:22:19.050865734 +0000 UTC m=+0.137298543 container died 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-86a04589ed30f2de77cfc02145014ddeed98f05d3e9a7fc44a3a5ea67c7e96e0-merged.mount: Deactivated successfully.
Jan 31 08:22:19 compute-0 podman[218162]: 2026-01-31 08:22:19.10242214 +0000 UTC m=+0.188854959 container remove 3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:19 compute-0 systemd[1]: libpod-conmon-3c98eb67de99cc8ea3d1fcc412c0d852b37e864757fb33ca746ba5976ec7a5ae.scope: Deactivated successfully.
Jan 31 08:22:19 compute-0 podman[218256]: 2026-01-31 08:22:19.250285119 +0000 UTC m=+0.047902705 container create d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:22:19 compute-0 systemd[1]: Started libpod-conmon-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope.
Jan 31 08:22:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:22:19 compute-0 podman[218256]: 2026-01-31 08:22:19.23390907 +0000 UTC m=+0.031526676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:22:19 compute-0 podman[218256]: 2026-01-31 08:22:19.344149433 +0000 UTC m=+0.141767039 container init d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:22:19 compute-0 podman[218256]: 2026-01-31 08:22:19.348698651 +0000 UTC m=+0.146316237 container start d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:22:19 compute-0 podman[218256]: 2026-01-31 08:22:19.351810928 +0000 UTC m=+0.149428524 container attach d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:22:19 compute-0 sudo[218377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezjodujcovmualfcncxqfuhgahowgln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847739.176283-178-78956825028929/AnsiballZ_stat.py'
Jan 31 08:22:19 compute-0 sudo[218377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:19 compute-0 python3.9[218379]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:22:19 compute-0 sudo[218377]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:19 compute-0 sudo[218565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmweqhwjgvghxyvxnnxlrdhisbvgglul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847739.176283-178-78956825028929/AnsiballZ_copy.py'
Jan 31 08:22:19 compute-0 sudo[218565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:19 compute-0 lvm[218577]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:22:19 compute-0 lvm[218576]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:22:19 compute-0 lvm[218577]: VG ceph_vg1 finished
Jan 31 08:22:19 compute-0 lvm[218576]: VG ceph_vg0 finished
Jan 31 08:22:19 compute-0 lvm[218579]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:22:19 compute-0 lvm[218579]: VG ceph_vg2 finished
Jan 31 08:22:20 compute-0 python3.9[218568]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847739.176283-178-78956825028929/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:20 compute-0 objective_ride[218322]: {}
Jan 31 08:22:20 compute-0 podman[218580]: 2026-01-31 08:22:20.053353221 +0000 UTC m=+0.057944937 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:22:20 compute-0 sudo[218565]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:20 compute-0 systemd[1]: libpod-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope: Deactivated successfully.
Jan 31 08:22:20 compute-0 systemd[1]: libpod-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope: Consumed 1.020s CPU time.
Jan 31 08:22:20 compute-0 podman[218256]: 2026-01-31 08:22:20.07507762 +0000 UTC m=+0.872695206 container died d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b058b0bc5bf3514ac152a480417a84725867b9eabf6b5aecf9f52afef3a592c4-merged.mount: Deactivated successfully.
Jan 31 08:22:20 compute-0 podman[218256]: 2026-01-31 08:22:20.245950045 +0000 UTC m=+1.043567631 container remove d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:22:20 compute-0 ceph-mon[75227]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:20 compute-0 systemd[1]: libpod-conmon-d7cef6820c8554d9e4e6e8e4e35d0d2d373ceb395f3798b842f8aab1ae14f0da.scope: Deactivated successfully.
Jan 31 08:22:20 compute-0 sudo[218071]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:22:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:22:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:22:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:22:20 compute-0 sudo[218717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:22:20 compute-0 sudo[218717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:22:20 compute-0 sudo[218717]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:20 compute-0 sudo[218791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thwoqzmqkxwlmgsjmowyokqmdvgqrdiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847740.2334216-194-209035105749368/AnsiballZ_lineinfile.py'
Jan 31 08:22:20 compute-0 sudo[218791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:20 compute-0 python3.9[218793]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:20 compute-0 sudo[218791]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:21 compute-0 sudo[218943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfrpwejnwzkskuivkotegamckwrnthsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847740.8124816-202-69545694882299/AnsiballZ_systemd.py'
Jan 31 08:22:21 compute-0 sudo[218943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:22:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:22:21 compute-0 python3.9[218945]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:22:21 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 08:22:21 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 08:22:21 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 08:22:21 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 08:22:21 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 08:22:21 compute-0 sudo[218943]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:22 compute-0 sudo[219099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anulbmemcolfsulzercspvsxobhtjidy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847741.863471-210-106070830839857/AnsiballZ_command.py'
Jan 31 08:22:22 compute-0 sudo[219099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:22 compute-0 python3.9[219101]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:22:22 compute-0 sudo[219099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:22 compute-0 ceph-mon[75227]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:22 compute-0 sudo[219252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcpovyopqwokspsvofheijxzkqrserzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847742.5468018-220-214413683674947/AnsiballZ_stat.py'
Jan 31 08:22:22 compute-0 sudo[219252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:22 compute-0 python3.9[219254]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:22:22 compute-0 sudo[219252]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:23 compute-0 sudo[219404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghuhuqlxjhksuemjecofjgoxsqvzjpjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847743.1581905-229-165288016788793/AnsiballZ_stat.py'
Jan 31 08:22:23 compute-0 ceph-mon[75227]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:23 compute-0 sudo[219404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:23 compute-0 python3.9[219406]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:22:23 compute-0 sudo[219404]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:23 compute-0 sudo[219527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxslfmvozbmebldrrqxqmdwcultfledc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847743.1581905-229-165288016788793/AnsiballZ_copy.py'
Jan 31 08:22:23 compute-0 sudo[219527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:24 compute-0 python3.9[219529]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847743.1581905-229-165288016788793/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:24 compute-0 sudo[219527]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:24 compute-0 sudo[219679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgdfnzcfumfauvzguorhzzgolchjgxsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847744.1910465-244-246059845351021/AnsiballZ_command.py'
Jan 31 08:22:24 compute-0 sudo[219679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:24 compute-0 python3.9[219681]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:22:24 compute-0 sudo[219679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:25 compute-0 sudo[219832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcolauyenanqfaltminqhisatagxujhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847744.7871327-252-181727641146718/AnsiballZ_lineinfile.py'
Jan 31 08:22:25 compute-0 sudo[219832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:25 compute-0 python3.9[219834]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:25 compute-0 sudo[219832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:25 compute-0 sudo[219984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvuvkslkvvpwpctunsopgaokbrmvzfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847745.3465528-260-136086638440827/AnsiballZ_replace.py'
Jan 31 08:22:25 compute-0 sudo[219984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:26 compute-0 ceph-mon[75227]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:26 compute-0 python3.9[219986]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:26 compute-0 sudo[219984]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:26 compute-0 sudo[220136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzcxfewcxchomhfanklvlblwbdvqhvry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847746.244158-268-176351871994497/AnsiballZ_replace.py'
Jan 31 08:22:26 compute-0 sudo[220136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:26 compute-0 python3.9[220138]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:26 compute-0 sudo[220136]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:27 compute-0 sudo[220288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgwwyrddyfkqomcbonxnggvixmcnavtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847746.835739-277-266530379626087/AnsiballZ_lineinfile.py'
Jan 31 08:22:27 compute-0 sudo[220288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:27 compute-0 python3.9[220290]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:27 compute-0 sudo[220288]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:27 compute-0 sudo[220440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmmenvdvnmcdainumjygniylzsxfdqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847747.4514854-277-273170052585072/AnsiballZ_lineinfile.py'
Jan 31 08:22:27 compute-0 sudo[220440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:27 compute-0 python3.9[220442]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:27 compute-0 sudo[220440]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:28 compute-0 ceph-mon[75227]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:28 compute-0 sudo[220592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrlwlqpxpuudqpfanxpxchaynavojts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847748.0060632-277-205142525617488/AnsiballZ_lineinfile.py'
Jan 31 08:22:28 compute-0 sudo[220592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:28 compute-0 python3.9[220594]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:28 compute-0 sudo[220592]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:28 compute-0 sudo[220744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rauvdhzsjrjpkqcbddmlceebsejjuxta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847748.5483136-277-180558525248258/AnsiballZ_lineinfile.py'
Jan 31 08:22:28 compute-0 sudo[220744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:28 compute-0 python3.9[220746]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:28 compute-0 sudo[220744]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:29 compute-0 sudo[220896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pryyjxnslkrruyxluvuoyqeyfgveirur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847749.1452012-306-57075420109799/AnsiballZ_stat.py'
Jan 31 08:22:29 compute-0 sudo[220896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:29 compute-0 python3.9[220898]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:22:29 compute-0 sudo[220896]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:30 compute-0 sudo[221050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcaxgfhfxktjedjkhifjfxjebpmuuctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847749.785143-314-67777567233098/AnsiballZ_command.py'
Jan 31 08:22:30 compute-0 sudo[221050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:30 compute-0 ceph-mon[75227]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:30 compute-0 python3.9[221052]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:22:30 compute-0 sudo[221050]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:30 compute-0 sudo[221203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthmywzliwlkswghwnhvzlnhnhjkbykc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847750.4516387-323-13135105406702/AnsiballZ_systemd_service.py'
Jan 31 08:22:30 compute-0 sudo[221203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:30 compute-0 python3.9[221205]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:31 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 31 08:22:31 compute-0 sudo[221203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:31 compute-0 sudo[221359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umcfnymoubtesxapqnctxnbjxsqxylnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847751.1744037-331-120685072679081/AnsiballZ_systemd_service.py'
Jan 31 08:22:31 compute-0 sudo[221359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:22:31
Jan 31 08:22:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:22:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:22:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images']
Jan 31 08:22:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:22:31 compute-0 python3.9[221361]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:31 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 08:22:31 compute-0 udevadm[221366]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 08:22:31 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 08:22:31 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 08:22:31 compute-0 multipathd[221369]: --------start up--------
Jan 31 08:22:31 compute-0 multipathd[221369]: read /etc/multipath.conf
Jan 31 08:22:31 compute-0 multipathd[221369]: path checkers start up
Jan 31 08:22:32 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 08:22:32 compute-0 sudo[221359]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:32 compute-0 ceph-mon[75227]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:32 compute-0 sudo[221526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnsuqmhwsoognsjuplpnmvwggzfugfqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847752.455564-343-177699014557130/AnsiballZ_file.py'
Jan 31 08:22:32 compute-0 sudo[221526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:22:32 compute-0 python3.9[221528]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 08:22:32 compute-0 sudo[221526]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:22:32 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:22:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:22:33 compute-0 sudo[221678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdsjmyqqwyrnoriuearuwfeztabuyovx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847753.032789-351-263781017021163/AnsiballZ_modprobe.py'
Jan 31 08:22:33 compute-0 sudo[221678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:33 compute-0 python3.9[221680]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 08:22:33 compute-0 kernel: Key type psk registered
Jan 31 08:22:33 compute-0 sudo[221678]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:34 compute-0 sudo[221841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjexgoqnjcicodabscpsidmclaqziwwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847753.7828674-359-176824095711282/AnsiballZ_stat.py'
Jan 31 08:22:34 compute-0 sudo[221841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:34 compute-0 python3.9[221843]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:22:34 compute-0 sudo[221841]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:34 compute-0 ceph-mon[75227]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:34 compute-0 sudo[221964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeprtyrzimwjahvyfinffygctlkujkqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847753.7828674-359-176824095711282/AnsiballZ_copy.py'
Jan 31 08:22:34 compute-0 sudo[221964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:34 compute-0 python3.9[221966]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769847753.7828674-359-176824095711282/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:34 compute-0 sudo[221964]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:35 compute-0 sudo[222116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igiccihvkybwuiaykaejkvdpvjdbobte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847754.943055-375-63031992049118/AnsiballZ_lineinfile.py'
Jan 31 08:22:35 compute-0 sudo[222116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:35 compute-0 python3.9[222118]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:35 compute-0 sudo[222116]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:35 compute-0 sudo[222268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-craqbzuonwyreicgvqsweugvvnlgpvgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847755.512228-383-220089026207437/AnsiballZ_systemd.py'
Jan 31 08:22:35 compute-0 sudo[222268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:36 compute-0 python3.9[222270]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:22:36 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 08:22:36 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 31 08:22:36 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 31 08:22:36 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 31 08:22:36 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 31 08:22:36 compute-0 sudo[222268]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:36 compute-0 ceph-mon[75227]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:36 compute-0 sudo[222424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcdfyetjkatggguwwdgsaoatsyfwfztl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847756.3968654-391-167467629145126/AnsiballZ_dnf.py'
Jan 31 08:22:36 compute-0 sudo[222424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:36 compute-0 python3.9[222426]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 08:22:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:38 compute-0 ceph-mon[75227]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:38 compute-0 systemd[1]: Reloading.
Jan 31 08:22:38 compute-0 systemd-sysv-generator[222456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:22:38 compute-0 systemd-rc-local-generator[222453]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:22:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:39 compute-0 systemd[1]: Reloading.
Jan 31 08:22:39 compute-0 systemd-rc-local-generator[222490]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:22:39 compute-0 systemd-sysv-generator[222493]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:22:39 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 08:22:39 compute-0 systemd-logind[793]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 08:22:39 compute-0 lvm[222543]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:22:39 compute-0 lvm[222543]: VG ceph_vg2 finished
Jan 31 08:22:39 compute-0 lvm[222541]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:22:39 compute-0 lvm[222541]: VG ceph_vg0 finished
Jan 31 08:22:39 compute-0 lvm[222540]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:22:39 compute-0 lvm[222540]: VG ceph_vg1 finished
Jan 31 08:22:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 08:22:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 31 08:22:39 compute-0 systemd[1]: Reloading.
Jan 31 08:22:39 compute-0 systemd-rc-local-generator[222591]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:22:39 compute-0 systemd-sysv-generator[222595]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:22:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 08:22:40 compute-0 ceph-mon[75227]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:40 compute-0 sudo[222424]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 08:22:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 31 08:22:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.143s CPU time.
Jan 31 08:22:40 compute-0 systemd[1]: run-rc6f0038b51bc4f9d87c5427b26522ed1.service: Deactivated successfully.
Jan 31 08:22:40 compute-0 sudo[223895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npcgyzguryytjlhrpflsncioeloqwovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847760.5869496-399-68033338854545/AnsiballZ_systemd_service.py'
Jan 31 08:22:40 compute-0 sudo[223895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:41 compute-0 python3.9[223897]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:22:41 compute-0 iscsid[216785]: iscsid shutting down.
Jan 31 08:22:41 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 31 08:22:41 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 08:22:41 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 31 08:22:41 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 08:22:41 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 31 08:22:41 compute-0 systemd[1]: Started Open-iSCSI.
Jan 31 08:22:41 compute-0 sudo[223895]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:41 compute-0 sudo[224051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzcugnzyfslizgsfmqoaxnnihbbyukln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847761.3423157-407-236319435300095/AnsiballZ_systemd_service.py'
Jan 31 08:22:41 compute-0 sudo[224051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:41 compute-0 python3.9[224053]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:22:41 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 08:22:41 compute-0 multipathd[221369]: exit (signal)
Jan 31 08:22:41 compute-0 multipathd[221369]: --------shut down-------
Jan 31 08:22:41 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 08:22:41 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 08:22:41 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 08:22:42 compute-0 multipathd[224060]: --------start up--------
Jan 31 08:22:42 compute-0 multipathd[224060]: read /etc/multipath.conf
Jan 31 08:22:42 compute-0 multipathd[224060]: path checkers start up
Jan 31 08:22:42 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 08:22:42 compute-0 sudo[224051]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:42 compute-0 ceph-mon[75227]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:42 compute-0 python3.9[224217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 08:22:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:22:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:22:43 compute-0 sudo[224371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcpqcrjiriciebzddqeacolwfkfobwcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847763.1138124-425-173993283922007/AnsiballZ_file.py'
Jan 31 08:22:43 compute-0 sudo[224371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:43 compute-0 python3.9[224373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:43 compute-0 sudo[224371]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:44 compute-0 sudo[224523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnaashxahntzmgzrudqqjtoargbbyhfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847763.8856497-436-90313284135320/AnsiballZ_systemd_service.py'
Jan 31 08:22:44 compute-0 sudo[224523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:44 compute-0 ceph-mon[75227]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:44 compute-0 python3.9[224525]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:22:44 compute-0 systemd[1]: Reloading.
Jan 31 08:22:44 compute-0 systemd-rc-local-generator[224548]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:22:44 compute-0 systemd-sysv-generator[224553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:22:44 compute-0 sudo[224523]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:45 compute-0 python3.9[224710]: ansible-ansible.builtin.service_facts Invoked
Jan 31 08:22:45 compute-0 network[224727]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 08:22:45 compute-0 network[224728]: 'network-scripts' will be removed from distribution in near future.
Jan 31 08:22:45 compute-0 network[224729]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 08:22:46 compute-0 ceph-mon[75227]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:46 compute-0 podman[224755]: 2026-01-31 08:22:46.542970642 +0000 UTC m=+0.126674646 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:22:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:48 compute-0 ceph-mon[75227]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:48 compute-0 sudo[225026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrmrawudkwddwcnlmwmefhejfpxeedxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847768.5824935-455-103260745126455/AnsiballZ_systemd_service.py'
Jan 31 08:22:48 compute-0 sudo[225026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:49 compute-0 python3.9[225028]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:49 compute-0 sudo[225026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:49 compute-0 ceph-mon[75227]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:49 compute-0 sudo[225179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddifnoikoxnnefojousbewvljbrjyahl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847769.3311622-455-52270252066040/AnsiballZ_systemd_service.py'
Jan 31 08:22:49 compute-0 sudo[225179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:49 compute-0 python3.9[225181]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:49 compute-0 sudo[225179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:50 compute-0 podman[225265]: 2026-01-31 08:22:50.170994084 +0000 UTC m=+0.062520066 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:22:50 compute-0 sudo[225351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbswzecuduikrdzwydvyqgkgomcysbyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847770.0114639-455-251974276688348/AnsiballZ_systemd_service.py'
Jan 31 08:22:50 compute-0 sudo[225351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:50 compute-0 python3.9[225353]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:50 compute-0 sudo[225351]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:50 compute-0 sudo[225504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxmeumdehsrmvqfqiiatovtocchsjnpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847770.7187433-455-79072893295438/AnsiballZ_systemd_service.py'
Jan 31 08:22:50 compute-0 sudo[225504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:51 compute-0 python3.9[225506]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:51 compute-0 sudo[225504]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:51 compute-0 sudo[225657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwkiugonoindmaoyrmlhdkecdbtpldv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847771.3924816-455-154496580752722/AnsiballZ_systemd_service.py'
Jan 31 08:22:51 compute-0 sudo[225657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:51 compute-0 python3.9[225659]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:51 compute-0 sudo[225657]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:51 compute-0 ceph-mon[75227]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:52 compute-0 sudo[225810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwtrrilobbwvzjivcwbnfonkchxrboei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847772.08371-455-81496515761615/AnsiballZ_systemd_service.py'
Jan 31 08:22:52 compute-0 sudo[225810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:52 compute-0 python3.9[225812]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:52 compute-0 sudo[225810]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:53 compute-0 sudo[225963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tifdfdpmejxlvbvntfcmvbnluvcnyjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847772.811769-455-168898793542012/AnsiballZ_systemd_service.py'
Jan 31 08:22:53 compute-0 sudo[225963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:53 compute-0 python3.9[225965]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:53 compute-0 sudo[225963]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:53 compute-0 sudo[226116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlufjzutqqnxqrjkrnleopwppjpwepdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847773.5507157-455-62985065768530/AnsiballZ_systemd_service.py'
Jan 31 08:22:53 compute-0 sudo[226116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:53 compute-0 ceph-mon[75227]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:54 compute-0 python3.9[226118]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:22:54 compute-0 sudo[226116]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:54 compute-0 sudo[226269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavcfgmetbenlpcfwoniglxwkefvrgmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847774.3730478-514-54699693486182/AnsiballZ_file.py'
Jan 31 08:22:54 compute-0 sudo[226269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:54 compute-0 python3.9[226271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:54 compute-0 sudo[226269]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:55 compute-0 sudo[226421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfjmfstnaxlpcbvmgfhoxmbtcytavisd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847774.98027-514-202373248236286/AnsiballZ_file.py'
Jan 31 08:22:55 compute-0 sudo[226421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:55 compute-0 python3.9[226423]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:55 compute-0 sudo[226421]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:55 compute-0 sudo[226573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbbitsngigurpeoqerrgnpuygvvjyvps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847775.5754724-514-179554722233067/AnsiballZ_file.py'
Jan 31 08:22:55 compute-0 sudo[226573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:55 compute-0 python3.9[226575]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:55 compute-0 sudo[226573]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:56 compute-0 ceph-mon[75227]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:56 compute-0 sudo[226725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnggrxzkilbjbdrzbasrugzdiubqxfbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847776.0931804-514-172571576482259/AnsiballZ_file.py'
Jan 31 08:22:56 compute-0 sudo[226725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:56 compute-0 python3.9[226727]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:56 compute-0 sudo[226725]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:56 compute-0 sudo[226877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mttgjlmffykihsrvnehttgifnepfdeak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847776.705044-514-253571094125983/AnsiballZ_file.py'
Jan 31 08:22:56 compute-0 sudo[226877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:57 compute-0 python3.9[226879]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:57 compute-0 sudo[226877]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:22:57 compute-0 sudo[227029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vffahyxboefzrhytuylwoabzwomqsoir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847777.3054883-514-240321257566243/AnsiballZ_file.py'
Jan 31 08:22:57 compute-0 sudo[227029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:57 compute-0 python3.9[227031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:57 compute-0 sudo[227029]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:58 compute-0 ceph-mon[75227]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:58 compute-0 sudo[227181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivbggjirpniuinidjdwpirmikwzmsylw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847777.871862-514-142474375092565/AnsiballZ_file.py'
Jan 31 08:22:58 compute-0 sudo[227181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:58 compute-0 python3.9[227183]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:58 compute-0 sudo[227181]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:58 compute-0 sudo[227333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swcftidsoofamzuprlgbtonnimcidldn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847778.5420272-514-66795630221257/AnsiballZ_file.py'
Jan 31 08:22:58 compute-0 sudo[227333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:22:58 compute-0 python3.9[227335]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:58 compute-0 sudo[227333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:22:59 compute-0 sudo[227485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acivqpumfvngqhxlbozphbwopnatmvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847779.169555-571-246975940570131/AnsiballZ_file.py'
Jan 31 08:22:59 compute-0 sudo[227485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:22:59 compute-0 python3.9[227487]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:22:59 compute-0 sudo[227485]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:00 compute-0 sudo[227637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyzceorpfefkbhnihbdnbleyvymngaex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847779.7893698-571-40494547458679/AnsiballZ_file.py'
Jan 31 08:23:00 compute-0 sudo[227637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:00 compute-0 ceph-mon[75227]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:00 compute-0 python3.9[227639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:00 compute-0 sudo[227637]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:00 compute-0 sudo[227789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqaxfcykaxnlkbrzpghamjephnepebpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847780.4693499-571-254887214944715/AnsiballZ_file.py'
Jan 31 08:23:00 compute-0 sudo[227789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:00 compute-0 python3.9[227791]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:01 compute-0 sudo[227789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:01 compute-0 sudo[227941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmcucikgekdzlhmnlumvkqgzuibvidpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847781.1264055-571-50312821214431/AnsiballZ_file.py'
Jan 31 08:23:01 compute-0 sudo[227941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:01 compute-0 python3.9[227943]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:01 compute-0 sudo[227941]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:01 compute-0 sudo[228093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywneyyahnwbdmscoxivmccwmrdrxxiyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847781.7514057-571-138708127377469/AnsiballZ_file.py'
Jan 31 08:23:01 compute-0 sudo[228093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:02 compute-0 python3.9[228095]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:02 compute-0 sudo[228093]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:02 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:02 compute-0 ceph-mon[75227]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:02 compute-0 sudo[228245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyogqdznqrgwibnxihmblwiqistcxymk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847782.2990544-571-215927392495366/AnsiballZ_file.py'
Jan 31 08:23:02 compute-0 sudo[228245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:02 compute-0 python3.9[228247]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:02 compute-0 sudo[228245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:03 compute-0 sudo[228397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkmruamueidrambjvlrnxnfqsbwhlrlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847782.908378-571-275809482932193/AnsiballZ_file.py'
Jan 31 08:23:03 compute-0 sudo[228397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:03 compute-0 python3.9[228399]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:03 compute-0 sudo[228397]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:03 compute-0 sudo[228549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sogkutzbzbhfyjlzkfpfkdyymflsrhlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847783.4117413-571-152051803547240/AnsiballZ_file.py'
Jan 31 08:23:03 compute-0 sudo[228549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:03 compute-0 python3.9[228551]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:03 compute-0 sudo[228549]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:04 compute-0 ceph-mon[75227]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:04 compute-0 sudo[228701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kllpovfafgvnsynbdsdvfajicpmgvyok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847784.0593047-629-164246578521993/AnsiballZ_command.py'
Jan 31 08:23:04 compute-0 sudo[228701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:04 compute-0 python3.9[228703]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:04 compute-0 sudo[228701]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:05 compute-0 python3.9[228855]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 08:23:05 compute-0 sudo[229005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mevzxztcnaoohvfwylzeibyragpphdbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847785.5357442-647-272613016518624/AnsiballZ_systemd_service.py'
Jan 31 08:23:05 compute-0 sudo[229005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:06 compute-0 python3.9[229007]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:23:06 compute-0 systemd[1]: Reloading.
Jan 31 08:23:06 compute-0 ceph-mon[75227]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:06 compute-0 systemd-rc-local-generator[229029]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:23:06 compute-0 systemd-sysv-generator[229032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:23:06 compute-0 sudo[229005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:07 compute-0 sudo[229192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtcuphxrnvgxavlaaduasrtyrfdduawm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847786.7569532-655-98091534686711/AnsiballZ_command.py'
Jan 31 08:23:07 compute-0 sudo[229192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:07 compute-0 python3.9[229194]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:07 compute-0 sudo[229192]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:07 compute-0 sudo[229345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfprsnkudsawkqtuoqmdvaywrzwshfpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847787.3739848-655-156586543997278/AnsiballZ_command.py'
Jan 31 08:23:07 compute-0 sudo[229345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:07 compute-0 python3.9[229347]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:07 compute-0 sudo[229345]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:08 compute-0 sudo[229498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxborbnjxmoqzacakdfmiwfnndjjznic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847787.9558089-655-232564838590750/AnsiballZ_command.py'
Jan 31 08:23:08 compute-0 sudo[229498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:08 compute-0 ceph-mon[75227]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:08 compute-0 python3.9[229500]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:08 compute-0 sudo[229498]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:08 compute-0 sudo[229651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljkisjojfmhutqseighjtvkhqfrfedys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847788.5626347-655-149216926455749/AnsiballZ_command.py'
Jan 31 08:23:08 compute-0 sudo[229651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:09 compute-0 python3.9[229653]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:09 compute-0 sudo[229651]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:09 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 08:23:09 compute-0 sudo[229805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruuckepisrtdefjcwabrikxjqtprakfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847789.1868894-655-256378670782803/AnsiballZ_command.py'
Jan 31 08:23:09 compute-0 sudo[229805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:09 compute-0 python3.9[229807]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:09 compute-0 sudo[229805]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:09 compute-0 sudo[229958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jarsuxutrevzakdifauuwpupvtjcpxja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847789.7423337-655-37757926841856/AnsiballZ_command.py'
Jan 31 08:23:09 compute-0 sudo[229958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:10 compute-0 python3.9[229960]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:10 compute-0 sudo[229958]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:10 compute-0 ceph-mon[75227]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:10 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 08:23:10 compute-0 sudo[230112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oczlxiddmelworbfxwtlcjoxaqngdkrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847790.287915-655-24590411643327/AnsiballZ_command.py'
Jan 31 08:23:10 compute-0 sudo[230112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:10 compute-0 python3.9[230114]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:10 compute-0 sudo[230112]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:11 compute-0 sudo[230265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrmfdrvmkgxixplvbylobexjwnyvulbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847790.860536-655-32867889499800/AnsiballZ_command.py'
Jan 31 08:23:11 compute-0 sudo[230265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:11 compute-0 python3.9[230267]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 08:23:11 compute-0 sudo[230265]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:12 compute-0 ceph-mon[75227]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:12 compute-0 sudo[230418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lejxfropfwujgeivfneexbcpuvsoiqxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847792.1416464-734-2794640309858/AnsiballZ_file.py'
Jan 31 08:23:12 compute-0 sudo[230418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:12 compute-0 python3.9[230420]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:12 compute-0 sudo[230418]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:12 compute-0 sudo[230570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybhkwonmfzkqyfjxylmxoqwxvwobcto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847792.7497845-734-33304150599410/AnsiballZ_file.py'
Jan 31 08:23:12 compute-0 sudo[230570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:13 compute-0 python3.9[230572]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:13 compute-0 sudo[230570]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:13 compute-0 sudo[230722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-execnddfagbmwunsnmkpsjqbeseqynkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847793.2799969-734-10874603035633/AnsiballZ_file.py'
Jan 31 08:23:13 compute-0 sudo[230722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:13 compute-0 python3.9[230724]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:13 compute-0 sudo[230722]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:14 compute-0 sudo[230874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzajhnbczhuxuvneayreunwuuqfvgtvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847793.9737895-756-173155095049265/AnsiballZ_file.py'
Jan 31 08:23:14 compute-0 sudo[230874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:14 compute-0 ceph-mon[75227]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:14 compute-0 python3.9[230876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:14 compute-0 sudo[230874]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:14 compute-0 sudo[231026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtapgjubqgwtwhgvuedgtnetbrvvckyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847794.5760999-756-162688015876787/AnsiballZ_file.py'
Jan 31 08:23:14 compute-0 sudo[231026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:15 compute-0 python3.9[231028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:15 compute-0 sudo[231026]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:15 compute-0 sudo[231178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmrjilrgyhjwynarhcqqgalxxbdgvjwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847795.1931934-756-55162195463084/AnsiballZ_file.py'
Jan 31 08:23:15 compute-0 sudo[231178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:15 compute-0 python3.9[231180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:15 compute-0 sudo[231178]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:16 compute-0 sudo[231330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuflqxiyydrsvhfdbtqqdhcfpgblohzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847795.8421988-756-185236757443919/AnsiballZ_file.py'
Jan 31 08:23:16 compute-0 sudo[231330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:16 compute-0 ceph-mon[75227]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:16 compute-0 python3.9[231332]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:16 compute-0 sudo[231330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:16 compute-0 sudo[231495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqpjhrwfbpedlmrmqlcrhhyjlhzavlqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847796.5204372-756-101263081419980/AnsiballZ_file.py'
Jan 31 08:23:16 compute-0 sudo[231495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:16 compute-0 podman[231456]: 2026-01-31 08:23:16.849950479 +0000 UTC m=+0.099371859 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:23:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:17 compute-0 python3.9[231499]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:17 compute-0 sudo[231495]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:17 compute-0 sudo[231660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjnqndoglxoqvqiuxrhkvpmrvhlgepqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847797.145584-756-179119345853210/AnsiballZ_file.py'
Jan 31 08:23:17 compute-0 sudo[231660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:17 compute-0 python3.9[231662]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:17 compute-0 sudo[231660]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:23:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:23:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:23:17.882 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:23:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:23:17.883 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:23:17 compute-0 sudo[231812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwyomzvktrfktnbqqxcclxgbymimyvry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847797.7245448-756-233214596672179/AnsiballZ_file.py'
Jan 31 08:23:17 compute-0 sudo[231812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:18 compute-0 python3.9[231814]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:18 compute-0 sudo[231812]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:18 compute-0 ceph-mon[75227]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:18 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 08:23:18 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 08:23:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:20 compute-0 ceph-mon[75227]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:20 compute-0 sudo[231841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:20 compute-0 sudo[231841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:20 compute-0 sudo[231841]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:20 compute-0 podman[231865]: 2026-01-31 08:23:20.520729361 +0000 UTC m=+0.039376438 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 08:23:20 compute-0 sudo[231872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:23:20 compute-0 sudo[231872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:21 compute-0 sudo[231872]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:23:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:23:21 compute-0 sudo[231940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:21 compute-0 sudo[231940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:21 compute-0 sudo[231940]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:21 compute-0 sudo[231965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:23:21 compute-0 sudo[231965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:23:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:23:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.488010038 +0000 UTC m=+0.061480356 container create 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:23:21 compute-0 systemd[1]: Started libpod-conmon-1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa.scope.
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.461489172 +0000 UTC m=+0.034959540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.573822204 +0000 UTC m=+0.147292572 container init 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.583013069 +0000 UTC m=+0.156483357 container start 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.586963653 +0000 UTC m=+0.160434041 container attach 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:23:21 compute-0 quizzical_mcclintock[232019]: 167 167
Jan 31 08:23:21 compute-0 systemd[1]: libpod-1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa.scope: Deactivated successfully.
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.589026173 +0000 UTC m=+0.162496451 container died 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-16acd85134a09d72651b6c8803bcffe62ad3e455b8552396ed74226143b7c0c0-merged.mount: Deactivated successfully.
Jan 31 08:23:21 compute-0 podman[232003]: 2026-01-31 08:23:21.633881398 +0000 UTC m=+0.207351686 container remove 1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:21 compute-0 systemd[1]: libpod-conmon-1b1ffcc3d1c3830793af8d6f3632b3c28cb8843b86ca758ab7def8d85e17adfa.scope: Deactivated successfully.
Jan 31 08:23:21 compute-0 podman[232043]: 2026-01-31 08:23:21.763440497 +0000 UTC m=+0.049864020 container create 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:23:21 compute-0 systemd[1]: Started libpod-conmon-80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f.scope.
Jan 31 08:23:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:21 compute-0 podman[232043]: 2026-01-31 08:23:21.745748516 +0000 UTC m=+0.032172009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:21 compute-0 podman[232043]: 2026-01-31 08:23:21.855888075 +0000 UTC m=+0.142311598 container init 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:23:21 compute-0 podman[232043]: 2026-01-31 08:23:21.86332942 +0000 UTC m=+0.149752923 container start 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:23:21 compute-0 podman[232043]: 2026-01-31 08:23:21.867919212 +0000 UTC m=+0.154342725 container attach 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:22 compute-0 practical_brown[232060]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:23:22 compute-0 practical_brown[232060]: --> All data devices are unavailable
Jan 31 08:23:22 compute-0 ceph-mon[75227]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:22 compute-0 systemd[1]: libpod-80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f.scope: Deactivated successfully.
Jan 31 08:23:22 compute-0 podman[232043]: 2026-01-31 08:23:22.379149727 +0000 UTC m=+0.665573210 container died 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5c85882fd73aef4027aa1450dfdc8f727940b7e99ef3555d49b7cb11aada7cf-merged.mount: Deactivated successfully.
Jan 31 08:23:22 compute-0 podman[232043]: 2026-01-31 08:23:22.422160968 +0000 UTC m=+0.708584441 container remove 80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:22 compute-0 systemd[1]: libpod-conmon-80db4c9cbd967894ed669b0c1709e9713d6091d0f7e356ed2534cdcbb7f2965f.scope: Deactivated successfully.
Jan 31 08:23:22 compute-0 sudo[231965]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:22 compute-0 sudo[232090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:22 compute-0 sudo[232090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:22 compute-0 sudo[232090]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:22 compute-0 sudo[232115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:23:22 compute-0 sudo[232115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:22 compute-0 podman[232152]: 2026-01-31 08:23:22.929528941 +0000 UTC m=+0.092338076 container create d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:23:22 compute-0 podman[232152]: 2026-01-31 08:23:22.866390389 +0000 UTC m=+0.029199544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:22 compute-0 systemd[1]: Started libpod-conmon-d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c.scope.
Jan 31 08:23:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:23 compute-0 podman[232152]: 2026-01-31 08:23:23.000238002 +0000 UTC m=+0.163047157 container init d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:23:23 compute-0 podman[232152]: 2026-01-31 08:23:23.00570883 +0000 UTC m=+0.168517975 container start d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:23:23 compute-0 xenodochial_chatterjee[232220]: 167 167
Jan 31 08:23:23 compute-0 systemd[1]: libpod-d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c.scope: Deactivated successfully.
Jan 31 08:23:23 compute-0 podman[232152]: 2026-01-31 08:23:23.011155147 +0000 UTC m=+0.173964282 container attach d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:23:23 compute-0 podman[232152]: 2026-01-31 08:23:23.011481497 +0000 UTC m=+0.174290632 container died d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-577854fa7570d133eabb64277ca42c6ea28791e585f83e9e34b6a080fde8e03a-merged.mount: Deactivated successfully.
Jan 31 08:23:23 compute-0 podman[232152]: 2026-01-31 08:23:23.050619126 +0000 UTC m=+0.213428251 container remove d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:23:23 compute-0 systemd[1]: libpod-conmon-d67379ba61637f7305b989f071b775f8700e7e80885a289001a46605e7e5030c.scope: Deactivated successfully.
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.215642079 +0000 UTC m=+0.062623748 container create d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:23 compute-0 systemd[1]: Started libpod-conmon-d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693.scope.
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.186697804 +0000 UTC m=+0.033679533 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:23 compute-0 sudo[232333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngjmhjyixmssgudgmpaehjhdmlxndsba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847802.8832247-945-211750075905553/AnsiballZ_getent.py'
Jan 31 08:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:23 compute-0 sudo[232333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.33036106 +0000 UTC m=+0.177342779 container init d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.336229669 +0000 UTC m=+0.183211328 container start d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.340167633 +0000 UTC m=+0.187149302 container attach d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:23 compute-0 ceph-mon[75227]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:23 compute-0 python3.9[232337]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 08:23:23 compute-0 sudo[232333]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:23 compute-0 jolly_burnell[232332]: {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:     "0": [
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:         {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "devices": [
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "/dev/loop3"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             ],
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_name": "ceph_lv0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_size": "21470642176",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "name": "ceph_lv0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "tags": {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.crush_device_class": "",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.encrypted": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.objectstore": "bluestore",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osd_id": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.type": "block",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.vdo": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.with_tpm": "0"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             },
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "type": "block",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "vg_name": "ceph_vg0"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:         }
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:     ],
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:     "1": [
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:         {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "devices": [
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "/dev/loop4"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             ],
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_name": "ceph_lv1",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_size": "21470642176",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "name": "ceph_lv1",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "tags": {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.crush_device_class": "",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.encrypted": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.objectstore": "bluestore",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osd_id": "1",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.type": "block",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.vdo": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.with_tpm": "0"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             },
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "type": "block",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "vg_name": "ceph_vg1"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:         }
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:     ],
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:     "2": [
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:         {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "devices": [
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "/dev/loop5"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             ],
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_name": "ceph_lv2",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_size": "21470642176",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "name": "ceph_lv2",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "tags": {
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.cluster_name": "ceph",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.crush_device_class": "",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.encrypted": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.objectstore": "bluestore",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osd_id": "2",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.type": "block",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.vdo": "0",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:                 "ceph.with_tpm": "0"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             },
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "type": "block",
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:             "vg_name": "ceph_vg2"
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:         }
Jan 31 08:23:23 compute-0 jolly_burnell[232332]:     ]
Jan 31 08:23:23 compute-0 jolly_burnell[232332]: }
Jan 31 08:23:23 compute-0 systemd[1]: libpod-d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693.scope: Deactivated successfully.
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.680690921 +0000 UTC m=+0.527672550 container died d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0f458a89e5567b6073e07d4122dcda717bfddf250da5eac9cfb3e5a2396c663-merged.mount: Deactivated successfully.
Jan 31 08:23:23 compute-0 podman[232266]: 2026-01-31 08:23:23.74926989 +0000 UTC m=+0.596251529 container remove d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_burnell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:23:23 compute-0 systemd[1]: libpod-conmon-d270d6ef6e459d29935343ec7f00b1ad1088ac3da218f3b9291c337e97649693.scope: Deactivated successfully.
Jan 31 08:23:23 compute-0 sudo[232115]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:23 compute-0 sudo[232431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:23:23 compute-0 sudo[232431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:23 compute-0 sudo[232431]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:23 compute-0 sudo[232456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:23:23 compute-0 sudo[232456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:24 compute-0 sudo[232573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgzbqmtsbboxvlgbpsllqreirlqeutvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847803.6778376-953-61467317859993/AnsiballZ_group.py'
Jan 31 08:23:24 compute-0 sudo[232573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.195524698 +0000 UTC m=+0.051415825 container create ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:23:24 compute-0 systemd[1]: Started libpod-conmon-ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a.scope.
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.17443121 +0000 UTC m=+0.030322447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.289240953 +0000 UTC m=+0.145132190 container init ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.29847615 +0000 UTC m=+0.154367297 container start ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.302617689 +0000 UTC m=+0.158508906 container attach ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 31 08:23:24 compute-0 peaceful_carver[232586]: 167 167
Jan 31 08:23:24 compute-0 systemd[1]: libpod-ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a.scope: Deactivated successfully.
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.304411431 +0000 UTC m=+0.160302578 container died ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b06efc16cad4582e4429c39e1f81d8f00cadec2595719931b6f330ec2b5afef-merged.mount: Deactivated successfully.
Jan 31 08:23:24 compute-0 podman[232548]: 2026-01-31 08:23:24.339322779 +0000 UTC m=+0.195213906 container remove ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_carver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:23:24 compute-0 python3.9[232583]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 08:23:24 compute-0 systemd[1]: libpod-conmon-ded5a315dcd9216ca91508fd54b71a1c19d23ae9778b536c650ef2ec81efef4a.scope: Deactivated successfully.
Jan 31 08:23:24 compute-0 groupadd[232605]: group added to /etc/group: name=nova, GID=42436
Jan 31 08:23:24 compute-0 groupadd[232605]: group added to /etc/gshadow: name=nova
Jan 31 08:23:24 compute-0 groupadd[232605]: new group: name=nova, GID=42436
Jan 31 08:23:24 compute-0 sudo[232573]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:24 compute-0 podman[232616]: 2026-01-31 08:23:24.461661739 +0000 UTC m=+0.034901008 container create 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:23:24 compute-0 systemd[1]: Started libpod-conmon-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope.
Jan 31 08:23:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:24 compute-0 podman[232616]: 2026-01-31 08:23:24.53374628 +0000 UTC m=+0.106985549 container init 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:23:24 compute-0 podman[232616]: 2026-01-31 08:23:24.445511363 +0000 UTC m=+0.018750622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:23:24 compute-0 podman[232616]: 2026-01-31 08:23:24.540384191 +0000 UTC m=+0.113623440 container start 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:23:24 compute-0 podman[232616]: 2026-01-31 08:23:24.543721508 +0000 UTC m=+0.116960767 container attach 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:25 compute-0 sudo[232832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrrmncdkrmwcxastbssttkikflvoxahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847804.572415-961-150671223796033/AnsiballZ_user.py'
Jan 31 08:23:25 compute-0 sudo[232832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:25 compute-0 lvm[232863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:23:25 compute-0 lvm[232864]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:23:25 compute-0 lvm[232863]: VG ceph_vg0 finished
Jan 31 08:23:25 compute-0 lvm[232864]: VG ceph_vg1 finished
Jan 31 08:23:25 compute-0 lvm[232866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:23:25 compute-0 lvm[232866]: VG ceph_vg2 finished
Jan 31 08:23:25 compute-0 python3.9[232839]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 08:23:25 compute-0 useradd[232868]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 31 08:23:25 compute-0 useradd[232868]: add 'nova' to group 'libvirt'
Jan 31 08:23:25 compute-0 useradd[232868]: add 'nova' to shadow group 'libvirt'
Jan 31 08:23:25 compute-0 laughing_feynman[232657]: {}
Jan 31 08:23:25 compute-0 systemd[1]: libpod-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope: Deactivated successfully.
Jan 31 08:23:25 compute-0 systemd[1]: libpod-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope: Consumed 1.187s CPU time.
Jan 31 08:23:25 compute-0 podman[232616]: 2026-01-31 08:23:25.356352671 +0000 UTC m=+0.929591940 container died 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:23:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c3b41064df8bebc19f801e14be5f98e8a97f182baddeb86f1d207744d6d9435-merged.mount: Deactivated successfully.
Jan 31 08:23:25 compute-0 podman[232616]: 2026-01-31 08:23:25.412715938 +0000 UTC m=+0.985955187 container remove 0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feynman, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:23:25 compute-0 sudo[232832]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:25 compute-0 systemd[1]: libpod-conmon-0449b6f38e939b006ba97921f48897d4b1c3d2610217bdacf7f1f09dcfe2a75d.scope: Deactivated successfully.
Jan 31 08:23:25 compute-0 sudo[232456]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:23:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:23:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:23:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:23:25 compute-0 sudo[232913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:23:25 compute-0 sudo[232913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:23:25 compute-0 sudo[232913]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:26 compute-0 ceph-mon[75227]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:23:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:23:26 compute-0 sshd-session[232938]: Accepted publickey for zuul from 192.168.122.30 port 39076 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:23:26 compute-0 systemd-logind[793]: New session 51 of user zuul.
Jan 31 08:23:26 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 31 08:23:26 compute-0 sshd-session[232938]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:23:26 compute-0 sshd-session[232941]: Received disconnect from 192.168.122.30 port 39076:11: disconnected by user
Jan 31 08:23:26 compute-0 sshd-session[232941]: Disconnected from user zuul 192.168.122.30 port 39076
Jan 31 08:23:26 compute-0 sshd-session[232938]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:23:26 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 08:23:26 compute-0 systemd-logind[793]: Session 51 logged out. Waiting for processes to exit.
Jan 31 08:23:26 compute-0 systemd-logind[793]: Removed session 51.
Jan 31 08:23:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:27 compute-0 python3.9[233091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:27 compute-0 python3.9[233212]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847806.64989-986-120812274702130/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:28 compute-0 python3.9[233362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:28 compute-0 ceph-mon[75227]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:28 compute-0 python3.9[233438]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:28 compute-0 python3.9[233588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:29 compute-0 python3.9[233709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847808.5453548-986-279288139863687/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:29 compute-0 python3.9[233859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:30 compute-0 ceph-mon[75227]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:30 compute-0 python3.9[233980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847809.5444365-986-274071273058778/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:30 compute-0 python3.9[234130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:23:31 compute-0 python3.9[234251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847810.5386238-986-22877050462384/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:23:31
Jan 31 08:23:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:23:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:23:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'backups', 'vms']
Jan 31 08:23:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:23:31 compute-0 python3.9[234401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:32 compute-0 ceph-mon[75227]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:23:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:32 compute-0 python3.9[234522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847811.508902-986-115779033404361/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:23:32 compute-0 sudo[234672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tehsuetzebbdbpqrcttzjnimcqnqsxge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847812.6364794-1069-35668567968639/AnsiballZ_file.py'
Jan 31 08:23:32 compute-0 sudo[234672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:23:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:23:33 compute-0 python3.9[234674]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:33 compute-0 sudo[234672]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:33 compute-0 sudo[234824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tindvgfbvjjfgfhmqnkwkzncusarceku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847813.259634-1077-238105965506943/AnsiballZ_copy.py'
Jan 31 08:23:33 compute-0 sudo[234824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:33 compute-0 python3.9[234826]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:33 compute-0 sudo[234824]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:34 compute-0 sudo[234976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puajczvtbjprimvilzsnaghxzajjoihz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847813.823669-1085-123609921743697/AnsiballZ_stat.py'
Jan 31 08:23:34 compute-0 sudo[234976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:34 compute-0 ceph-mon[75227]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 31 08:23:34 compute-0 python3.9[234978]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:23:34 compute-0 sudo[234976]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:34 compute-0 sudo[235128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jouopksignmxmqkuftevtdpbxgsavnqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847814.4122767-1093-101739576691211/AnsiballZ_stat.py'
Jan 31 08:23:34 compute-0 sudo[235128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:34 compute-0 python3.9[235130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:34 compute-0 sudo[235128]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:35 compute-0 sudo[235251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoqhuxlzwbbzrxhshqchkrojykkjimil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847814.4122767-1093-101739576691211/AnsiballZ_copy.py'
Jan 31 08:23:35 compute-0 sudo[235251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:35 compute-0 python3.9[235253]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769847814.4122767-1093-101739576691211/.source _original_basename=.cg9u3wcc follow=False checksum=1a407d9fba258306aa66616f39ab370f27391333 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 08:23:35 compute-0 sudo[235251]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:35 compute-0 python3.9[235405]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:23:36 compute-0 ceph-mon[75227]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:36 compute-0 python3.9[235557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:37 compute-0 python3.9[235678]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847816.1355546-1119-77208351948412/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:37 compute-0 python3.9[235828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 08:23:38 compute-0 python3.9[235949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769847817.2293115-1134-239044020794555/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 08:23:38 compute-0 ceph-mon[75227]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:38 compute-0 sudo[236099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwkakmwkiwqrrklghqqwekfwndguxhyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847818.4672248-1151-269092247588018/AnsiballZ_container_config_data.py'
Jan 31 08:23:38 compute-0 sudo[236099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:39 compute-0 python3.9[236101]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 08:23:39 compute-0 sudo[236099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:39 compute-0 sudo[236251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neyozmexxafyhoritsisfkquzbyauded ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847819.40588-1162-202146280718049/AnsiballZ_container_config_hash.py'
Jan 31 08:23:39 compute-0 sudo[236251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:40 compute-0 python3.9[236253]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:23:40 compute-0 sudo[236251]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:40 compute-0 ceph-mon[75227]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:40 compute-0 sudo[236403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nigaqcejluimkmvkklgdckfunzubktof ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847820.3499992-1172-50425779351151/AnsiballZ_edpm_container_manage.py'
Jan 31 08:23:40 compute-0 sudo[236403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:41 compute-0 python3[236405]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:23:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:42 compute-0 ceph-mon[75227]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 31 08:23:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:23:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:23:44 compute-0 ceph-mon[75227]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:23:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:23:45 compute-0 ceph-mon[75227]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 08:23:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:47 compute-0 ceph-mon[75227]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:50 compute-0 ceph-mon[75227]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:50 compute-0 podman[236474]: 2026-01-31 08:23:50.28870944 +0000 UTC m=+3.171703859 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:23:50 compute-0 podman[236418]: 2026-01-31 08:23:50.300622323 +0000 UTC m=+9.155412013 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 08:23:50 compute-0 podman[236524]: 2026-01-31 08:23:50.43356181 +0000 UTC m=+0.060867938 container create b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:23:50 compute-0 podman[236524]: 2026-01-31 08:23:50.403663437 +0000 UTC m=+0.030969575 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 08:23:50 compute-0 python3[236405]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 08:23:50 compute-0 sudo[236403]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:51 compute-0 sudo[236728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exlkwfeswhditgjezmdigqvbifhemvps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847830.752844-1180-263045874952413/AnsiballZ_stat.py'
Jan 31 08:23:51 compute-0 sudo[236728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:51 compute-0 podman[236686]: 2026-01-31 08:23:51.0554968 +0000 UTC m=+0.059523469 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Jan 31 08:23:51 compute-0 python3.9[236734]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:23:51 compute-0 sudo[236728]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:51 compute-0 sudo[236886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxnfyaviihokhbmvcracmynheharqbrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847831.684341-1192-55087109994549/AnsiballZ_container_config_data.py'
Jan 31 08:23:51 compute-0 sudo[236886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:52 compute-0 python3.9[236888]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 08:23:52 compute-0 sudo[236886]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:52 compute-0 ceph-mon[75227]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.450585) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832450650, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1603, "num_deletes": 253, "total_data_size": 2684871, "memory_usage": 2729680, "flush_reason": "Manual Compaction"}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832459420, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1530477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11833, "largest_seqno": 13435, "table_properties": {"data_size": 1525100, "index_size": 2581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13606, "raw_average_key_size": 20, "raw_value_size": 1513278, "raw_average_value_size": 2238, "num_data_blocks": 119, "num_entries": 676, "num_filter_entries": 676, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847651, "oldest_key_time": 1769847651, "file_creation_time": 1769847832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 8906 microseconds, and 4784 cpu microseconds.
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.459490) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1530477 bytes OK
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.459518) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.461112) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.461147) EVENT_LOG_v1 {"time_micros": 1769847832461137, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.461187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2677933, prev total WAL file size 2677933, number of live WAL files 2.
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.462527) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1494KB)], [29(8235KB)]
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832462587, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9963208, "oldest_snapshot_seqno": -1}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3994 keys, 7687150 bytes, temperature: kUnknown
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832522079, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7687150, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7658453, "index_size": 17579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95458, "raw_average_key_size": 23, "raw_value_size": 7584414, "raw_average_value_size": 1898, "num_data_blocks": 764, "num_entries": 3994, "num_filter_entries": 3994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769847832, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.522478) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7687150 bytes
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.524028) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.2 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(11.5) write-amplify(5.0) OK, records in: 4429, records dropped: 435 output_compression: NoCompression
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.524069) EVENT_LOG_v1 {"time_micros": 1769847832524051, "job": 12, "event": "compaction_finished", "compaction_time_micros": 59599, "compaction_time_cpu_micros": 26695, "output_level": 6, "num_output_files": 1, "total_output_size": 7687150, "num_input_records": 4429, "num_output_records": 3994, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832524504, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847832526294, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.462399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:52 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:23:52.526383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:23:52 compute-0 sudo[237038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhhbyjsgnygdptpcpwgvnmyviltxkro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847832.3922367-1203-80036475419412/AnsiballZ_container_config_hash.py'
Jan 31 08:23:52 compute-0 sudo[237038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:52 compute-0 python3.9[237040]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 08:23:52 compute-0 sudo[237038]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:53 compute-0 sudo[237190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezmpsdqduomjzifwdvifznrndrbblwlh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769847833.1947377-1213-156760097906438/AnsiballZ_edpm_container_manage.py'
Jan 31 08:23:53 compute-0 sudo[237190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:53 compute-0 ceph-mon[75227]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:53 compute-0 python3[237192]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 08:23:54 compute-0 podman[237229]: 2026-01-31 08:23:54.003662936 +0000 UTC m=+0.060824906 container create ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:23:54 compute-0 podman[237229]: 2026-01-31 08:23:53.965652539 +0000 UTC m=+0.022814579 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 08:23:54 compute-0 python3[237192]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 08:23:54 compute-0 sudo[237190]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:54 compute-0 sudo[237419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vndhrlosojristijlpnopsozedejcfut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847834.308859-1221-1635699879273/AnsiballZ_stat.py'
Jan 31 08:23:54 compute-0 sudo[237419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:54 compute-0 python3.9[237421]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:23:54 compute-0 sudo[237419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:55 compute-0 sudo[237573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsrjipmfnjhtinxtsbcvosgtepnvhqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847835.0209594-1230-246705279519827/AnsiballZ_file.py'
Jan 31 08:23:55 compute-0 sudo[237573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:55 compute-0 python3.9[237575]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:55 compute-0 sudo[237573]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:55 compute-0 sudo[237724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkrlyezynimbobkzkfvrizrracekiyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847835.5164196-1230-193594826925030/AnsiballZ_copy.py'
Jan 31 08:23:55 compute-0 sudo[237724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:56 compute-0 python3.9[237726]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769847835.5164196-1230-193594826925030/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 08:23:56 compute-0 sudo[237724]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:56 compute-0 ceph-mon[75227]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:56 compute-0 sudo[237800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngbwrzzhgdlangysttnpmxepmquznisd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847835.5164196-1230-193594826925030/AnsiballZ_systemd.py'
Jan 31 08:23:56 compute-0 sudo[237800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:56 compute-0 python3.9[237802]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 08:23:56 compute-0 systemd[1]: Reloading.
Jan 31 08:23:56 compute-0 systemd-sysv-generator[237831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:23:56 compute-0 systemd-rc-local-generator[237827]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:23:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:57 compute-0 sudo[237800]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:57 compute-0 sudo[237911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euiyqatpvyuudmmeddxnfvqepbtarbty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847835.5164196-1230-193594826925030/AnsiballZ_systemd.py'
Jan 31 08:23:57 compute-0 sudo[237911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:23:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:23:57 compute-0 python3.9[237913]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 08:23:57 compute-0 systemd[1]: Reloading.
Jan 31 08:23:57 compute-0 systemd-rc-local-generator[237937]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:23:57 compute-0 systemd-sysv-generator[237941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:23:57 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 08:23:58 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 08:23:58 compute-0 podman[237953]: 2026-01-31 08:23:58.089686953 +0000 UTC m=+0.100461621 container init ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 08:23:58 compute-0 podman[237953]: 2026-01-31 08:23:58.094550933 +0000 UTC m=+0.105325561 container start ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:23:58 compute-0 nova_compute[237968]: + sudo -E kolla_set_configs
Jan 31 08:23:58 compute-0 podman[237953]: nova_compute
Jan 31 08:23:58 compute-0 systemd[1]: Started nova_compute container.
Jan 31 08:23:58 compute-0 sudo[237911]: pam_unix(sudo:session): session closed for user root
Jan 31 08:23:58 compute-0 ceph-mon[75227]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Validating config file
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying service configuration files
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Deleting /etc/ceph
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Creating directory /etc/ceph
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Writing out command to execute
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:23:58 compute-0 nova_compute[237968]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:23:58 compute-0 nova_compute[237968]: ++ cat /run_command
Jan 31 08:23:58 compute-0 nova_compute[237968]: + CMD=nova-compute
Jan 31 08:23:58 compute-0 nova_compute[237968]: + ARGS=
Jan 31 08:23:58 compute-0 nova_compute[237968]: + sudo kolla_copy_cacerts
Jan 31 08:23:58 compute-0 nova_compute[237968]: + [[ ! -n '' ]]
Jan 31 08:23:58 compute-0 nova_compute[237968]: + . kolla_extend_start
Jan 31 08:23:58 compute-0 nova_compute[237968]: Running command: 'nova-compute'
Jan 31 08:23:58 compute-0 nova_compute[237968]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 08:23:58 compute-0 nova_compute[237968]: + umask 0022
Jan 31 08:23:58 compute-0 nova_compute[237968]: + exec nova-compute
Jan 31 08:23:58 compute-0 python3.9[238129]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:23:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:23:59 compute-0 python3.9[238280]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:24:00 compute-0 python3.9[238430]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 08:24:00 compute-0 ceph-mon[75227]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:00 compute-0 sudo[238580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttxpnoiizvjnspmagnpndttnmppeeapc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847840.474568-1290-263712202524208/AnsiballZ_podman_container.py'
Jan 31 08:24:00 compute-0 sudo[238580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:24:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:01 compute-0 python3.9[238582]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 08:24:01 compute-0 sudo[238580]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:01 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.266 237972 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.267 237972 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.267 237972 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.267 237972 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.415 237972 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.431 237972 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:01 compute-0 nova_compute[237968]: 2026-01-31 08:24:01.432 237972 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:24:01 compute-0 sudo[238760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmlfqrkzcguflsrqajiunyjpqpyuigxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847841.4034865-1298-262454131932990/AnsiballZ_systemd.py'
Jan 31 08:24:01 compute-0 sudo[238760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:24:01 compute-0 python3.9[238762]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 08:24:02 compute-0 systemd[1]: Stopping nova_compute container...
Jan 31 08:24:02 compute-0 nova_compute[237968]: 2026-01-31 08:24:02.096 237972 INFO nova.virt.driver [None req-d9f5c17e-8a43-4d95-bf06-9a10e6ca4464 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 08:24:02 compute-0 systemd[1]: libpod-ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662.scope: Deactivated successfully.
Jan 31 08:24:02 compute-0 systemd[1]: libpod-ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662.scope: Consumed 2.439s CPU time.
Jan 31 08:24:02 compute-0 podman[238766]: 2026-01-31 08:24:02.120731132 +0000 UTC m=+0.073125022 container died ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Jan 31 08:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662-userdata-shm.mount: Deactivated successfully.
Jan 31 08:24:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833-merged.mount: Deactivated successfully.
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:03 compute-0 ceph-mon[75227]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:04 compute-0 podman[238766]: 2026-01-31 08:24:04.72109384 +0000 UTC m=+2.673487690 container cleanup ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:24:04 compute-0 podman[238766]: nova_compute
Jan 31 08:24:04 compute-0 podman[238796]: nova_compute
Jan 31 08:24:04 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 08:24:04 compute-0 systemd[1]: Stopped nova_compute container.
Jan 31 08:24:04 compute-0 systemd[1]: Starting nova_compute container...
Jan 31 08:24:04 compute-0 ceph-mon[75227]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/749357b78f09c7d1b2042c749eea4adeb953260796ecd08f949e9c9719838833/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:05 compute-0 podman[238809]: 2026-01-31 08:24:05.115511513 +0000 UTC m=+0.298416744 container init ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:24:05 compute-0 podman[238809]: 2026-01-31 08:24:05.123039461 +0000 UTC m=+0.305944632 container start ad91499f592110baa995f9e773e9a9441889f826f5199ed38e72f1e54bd13662 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 08:24:05 compute-0 nova_compute[238824]: + sudo -E kolla_set_configs
Jan 31 08:24:05 compute-0 podman[238809]: nova_compute
Jan 31 08:24:05 compute-0 systemd[1]: Started nova_compute container.
Jan 31 08:24:05 compute-0 sudo[238760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Validating config file
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying service configuration files
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /etc/ceph
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Creating directory /etc/ceph
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Writing out command to execute
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:24:05 compute-0 nova_compute[238824]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 08:24:05 compute-0 nova_compute[238824]: ++ cat /run_command
Jan 31 08:24:05 compute-0 nova_compute[238824]: + CMD=nova-compute
Jan 31 08:24:05 compute-0 nova_compute[238824]: + ARGS=
Jan 31 08:24:05 compute-0 nova_compute[238824]: + sudo kolla_copy_cacerts
Jan 31 08:24:05 compute-0 nova_compute[238824]: + [[ ! -n '' ]]
Jan 31 08:24:05 compute-0 nova_compute[238824]: + . kolla_extend_start
Jan 31 08:24:05 compute-0 nova_compute[238824]: Running command: 'nova-compute'
Jan 31 08:24:05 compute-0 nova_compute[238824]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 08:24:05 compute-0 nova_compute[238824]: + umask 0022
Jan 31 08:24:05 compute-0 nova_compute[238824]: + exec nova-compute
Jan 31 08:24:05 compute-0 sudo[238985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umoytmathqzonqepwjetiipsfztvqpej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769847845.40325-1307-205698534021613/AnsiballZ_podman_container.py'
Jan 31 08:24:05 compute-0 sudo[238985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:24:05 compute-0 ceph-mon[75227]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:05 compute-0 python3.9[238987]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 08:24:06 compute-0 systemd[1]: Started libpod-conmon-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713.scope.
Jan 31 08:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:06 compute-0 podman[239012]: 2026-01-31 08:24:06.170609935 +0000 UTC m=+0.140070654 container init b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:24:06 compute-0 podman[239012]: 2026-01-31 08:24:06.17668499 +0000 UTC m=+0.146145699 container start b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:24:06 compute-0 python3.9[238987]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 08:24:06 compute-0 nova_compute_init[239034]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 08:24:06 compute-0 systemd[1]: libpod-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713.scope: Deactivated successfully.
Jan 31 08:24:06 compute-0 podman[239035]: 2026-01-31 08:24:06.248621466 +0000 UTC m=+0.033244320 container died b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713-userdata-shm.mount: Deactivated successfully.
Jan 31 08:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-32474850052e02a47c1250101de74ea01710f0f76aea0fc256f76eab51cfd9e7-merged.mount: Deactivated successfully.
Jan 31 08:24:06 compute-0 podman[239046]: 2026-01-31 08:24:06.323854827 +0000 UTC m=+0.081246106 container cleanup b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:24:06 compute-0 systemd[1]: libpod-conmon-b44711eb7963a861f67aacdac80c6c0eae2f31ac8f9050d94be98043d2cdd713.scope: Deactivated successfully.
Jan 31 08:24:06 compute-0 sudo[238985]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:06 compute-0 sshd-session[214550]: Connection closed by 192.168.122.30 port 57404
Jan 31 08:24:06 compute-0 sshd-session[214547]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:24:06 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 08:24:06 compute-0 systemd[1]: session-50.scope: Consumed 1min 50.117s CPU time.
Jan 31 08:24:06 compute-0 systemd-logind[793]: Session 50 logged out. Waiting for processes to exit.
Jan 31 08:24:06 compute-0 systemd-logind[793]: Removed session 50.
Jan 31 08:24:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.171 238828 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.171 238828 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.171 238828 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.172 238828 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.306 238828 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.329 238828 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.329 238828 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 31 08:24:07 compute-0 nova_compute[238824]: 2026-01-31 08:24:07.833 238828 INFO nova.virt.driver [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 31 08:24:08 compute-0 ceph-mon[75227]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.018 238828 INFO nova.compute.provider_config [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.075 238828 DEBUG oslo_concurrency.lockutils [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.076 238828 DEBUG oslo_concurrency.lockutils [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.076 238828 DEBUG oslo_concurrency.lockutils [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.077 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.077 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.078 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.078 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.078 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.079 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.080 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.080 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.080 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.081 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.081 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.081 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.082 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.082 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.082 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.083 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.083 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.083 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.084 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.084 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.084 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.085 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.085 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.085 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.086 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.086 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.086 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.087 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.087 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.087 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.088 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.088 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.088 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.089 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.089 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.089 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.090 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.090 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.090 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.091 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.091 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.091 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.092 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.092 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.092 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.093 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.093 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.093 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.094 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.094 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.094 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.095 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.095 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.095 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.096 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.097 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.097 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.097 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.098 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.099 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.099 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.099 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.100 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.100 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.100 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.101 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.101 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.101 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.102 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.102 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.102 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.103 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.103 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.103 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.104 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.104 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.104 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.105 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.106 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.107 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.108 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.109 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.110 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.111 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.112 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.113 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.114 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.115 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.116 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.117 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.118 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.119 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.120 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.121 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.122 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.123 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.124 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.125 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.126 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.127 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.128 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.129 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.130 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.131 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.132 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.133 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.134 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.135 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.136 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.137 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.138 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.139 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.140 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.141 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.142 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.143 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.144 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.145 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.146 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.147 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.148 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.149 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.150 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.151 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.152 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.153 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.154 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.155 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.156 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.157 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.158 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.159 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.160 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.161 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.162 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.163 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.164 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.165 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.166 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.167 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.168 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.169 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.170 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.171 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.172 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.173 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 WARNING oslo_config.cfg [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 08:24:08 compute-0 nova_compute[238824]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 08:24:08 compute-0 nova_compute[238824]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 08:24:08 compute-0 nova_compute[238824]: and ``live_migration_inbound_addr`` respectively.
Jan 31 08:24:08 compute-0 nova_compute[238824]: ).  Its value may be silently ignored in the future.
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.174 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.175 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.176 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_secret_uuid        = 82c880e6-d992-5408-8b12-efff9c275473 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.177 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.178 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.179 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.180 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.181 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.182 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.183 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.184 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.185 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.186 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.187 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.188 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.189 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.190 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.191 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.192 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.193 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.194 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.195 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.196 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.197 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.198 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.199 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.200 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.201 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.202 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.203 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.204 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.205 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.206 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.207 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.208 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.209 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.210 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.211 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.212 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.213 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.214 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.215 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.216 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.217 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.218 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.219 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.220 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.221 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.222 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.223 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.224 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.225 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.226 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.227 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.228 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.229 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.230 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.231 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.232 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.233 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.234 238828 DEBUG oslo_service.service [None req-54a3a391-e289-4ec8-b50b-f2cb93d1eaf6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.235 238828 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.255 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.256 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.256 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.257 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 31 08:24:08 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 08:24:08 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 31 08:24:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.315 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6572815070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.318 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6572815070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.318 238828 INFO nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Connection event '1' reason 'None'
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.366 238828 WARNING nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 08:24:08 compute-0 nova_compute[238824]: 2026-01-31 08:24:08.367 238828 DEBUG nova.virt.libvirt.volume.mount [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 31 08:24:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.268 238828 INFO nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]: 
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <host>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <uuid>2848852e-0b64-43df-9df3-1c9bd96fb83b</uuid>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <arch>x86_64</arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model>EPYC-Rome-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <vendor>AMD</vendor>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <microcode version='16777317'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <signature family='23' model='49' stepping='0'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='x2apic'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='tsc-deadline'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='osxsave'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='hypervisor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='tsc_adjust'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='spec-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='stibp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='arch-capabilities'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='cmp_legacy'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='topoext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='virt-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='lbrv'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='tsc-scale'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='vmcb-clean'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='pause-filter'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='pfthreshold'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='svme-addr-chk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='rdctl-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='skip-l1dfl-vmentry'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='mds-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature name='pschange-mc-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <pages unit='KiB' size='4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <pages unit='KiB' size='2048'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <pages unit='KiB' size='1048576'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <power_management>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <suspend_mem/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </power_management>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <iommu support='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <migration_features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <live/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <uri_transports>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <uri_transport>tcp</uri_transport>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <uri_transport>rdma</uri_transport>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </uri_transports>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </migration_features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <topology>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <cells num='1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <cell id='0'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           <memory unit='KiB'>7864296</memory>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           <pages unit='KiB' size='4'>1966074</pages>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           <pages unit='KiB' size='2048'>0</pages>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           <distances>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <sibling id='0' value='10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           </distances>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           <cpus num='8'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:           </cpus>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         </cell>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </cells>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </topology>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <cache>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </cache>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <secmodel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model>selinux</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <doi>0</doi>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </secmodel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <secmodel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model>dac</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <doi>0</doi>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </secmodel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </host>
Jan 31 08:24:09 compute-0 nova_compute[238824]: 
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <guest>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <os_type>hvm</os_type>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <arch name='i686'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <wordsize>32</wordsize>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <domain type='qemu'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <domain type='kvm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <pae/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <nonpae/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <acpi default='on' toggle='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <apic default='on' toggle='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <cpuselection/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <deviceboot/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <disksnapshot default='on' toggle='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <externalSnapshot/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </guest>
Jan 31 08:24:09 compute-0 nova_compute[238824]: 
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <guest>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <os_type>hvm</os_type>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <arch name='x86_64'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <wordsize>64</wordsize>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <domain type='qemu'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <domain type='kvm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <acpi default='on' toggle='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <apic default='on' toggle='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <cpuselection/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <deviceboot/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <disksnapshot default='on' toggle='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <externalSnapshot/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </guest>
Jan 31 08:24:09 compute-0 nova_compute[238824]: 
Jan 31 08:24:09 compute-0 nova_compute[238824]: </capabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]: 
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.275 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.294 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 08:24:09 compute-0 nova_compute[238824]: <domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <domain>kvm</domain>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <arch>i686</arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <vcpu max='240'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <iothreads supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <os supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='firmware'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <loader supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>rom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pflash</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='readonly'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>yes</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='secure'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </loader>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </os>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='maximum' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='maximumMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-model' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <vendor>AMD</vendor>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='x2apic'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='stibp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='succor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lbrv'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='custom' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Dhyana-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <memoryBacking supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='sourceType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>anonymous</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>memfd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </memoryBacking>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <disk supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='diskDevice'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>disk</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cdrom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>floppy</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>lun</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ide</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>fdc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>sata</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </disk>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <graphics supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vnc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egl-headless</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </graphics>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <video supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='modelType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vga</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cirrus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>none</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>bochs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ramfb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </video>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hostdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='mode'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>subsystem</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='startupPolicy'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>mandatory</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>requisite</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>optional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='subsysType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pci</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='capsType'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='pciBackend'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hostdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <rng supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>random</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </rng>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <filesystem supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='driverType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>path</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>handle</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtiofs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </filesystem>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tpm supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-tis</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-crb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emulator</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>external</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendVersion'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>2.0</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </tpm>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <redirdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </redirdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <channel supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </channel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <crypto supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </crypto>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <interface supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>passt</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </interface>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <panic supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>isa</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>hyperv</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </panic>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <console supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>null</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dev</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pipe</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stdio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>udp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tcp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu-vdagent</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </console>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <gic supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <vmcoreinfo supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <genid supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backingStoreInput supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backup supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <async-teardown supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <s390-pv supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <ps2 supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tdx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sev supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sgx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hyperv supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='features'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>relaxed</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vapic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>spinlocks</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vpindex</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>runtime</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>synic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stimer</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reset</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vendor_id</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>frequencies</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reenlightenment</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tlbflush</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ipi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>avic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emsr_bitmap</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>xmm_input</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <spinlocks>4095</spinlocks>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <stimer_direct>on</stimer_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hyperv>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <launchSecurity supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </features>
Jan 31 08:24:09 compute-0 nova_compute[238824]: </domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.300 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 08:24:09 compute-0 nova_compute[238824]: <domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <domain>kvm</domain>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <arch>i686</arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <vcpu max='4096'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <iothreads supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <os supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='firmware'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <loader supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>rom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pflash</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='readonly'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>yes</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='secure'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </loader>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </os>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='maximum' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='maximumMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-model' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <vendor>AMD</vendor>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='x2apic'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='stibp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='succor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lbrv'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='custom' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Dhyana-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <memoryBacking supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='sourceType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>anonymous</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>memfd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </memoryBacking>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <disk supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='diskDevice'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>disk</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cdrom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>floppy</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>lun</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>fdc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>sata</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </disk>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <graphics supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vnc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egl-headless</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </graphics>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <video supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='modelType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vga</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cirrus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>none</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>bochs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ramfb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </video>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hostdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='mode'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>subsystem</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='startupPolicy'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>mandatory</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>requisite</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>optional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='subsysType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pci</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='capsType'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='pciBackend'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hostdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <rng supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>random</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </rng>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <filesystem supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='driverType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>path</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>handle</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtiofs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </filesystem>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tpm supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-tis</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-crb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emulator</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>external</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendVersion'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>2.0</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </tpm>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <redirdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </redirdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <channel supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </channel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <crypto supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </crypto>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <interface supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>passt</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </interface>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <panic supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>isa</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>hyperv</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </panic>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <console supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>null</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dev</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pipe</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stdio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>udp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tcp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu-vdagent</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </console>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <gic supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <vmcoreinfo supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <genid supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backingStoreInput supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backup supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <async-teardown supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <s390-pv supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <ps2 supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tdx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sev supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sgx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hyperv supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='features'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>relaxed</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vapic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>spinlocks</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vpindex</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>runtime</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>synic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stimer</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reset</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vendor_id</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>frequencies</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reenlightenment</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tlbflush</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ipi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>avic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emsr_bitmap</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>xmm_input</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <spinlocks>4095</spinlocks>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <stimer_direct>on</stimer_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hyperv>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <launchSecurity supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </features>
Jan 31 08:24:09 compute-0 nova_compute[238824]: </domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.345 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.350 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 08:24:09 compute-0 nova_compute[238824]: <domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <domain>kvm</domain>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <arch>x86_64</arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <vcpu max='240'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <iothreads supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <os supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='firmware'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <loader supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>rom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pflash</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='readonly'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>yes</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='secure'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </loader>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </os>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='maximum' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='maximumMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-model' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <vendor>AMD</vendor>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='x2apic'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='stibp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='succor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lbrv'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='custom' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Dhyana-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <memoryBacking supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='sourceType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>anonymous</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>memfd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </memoryBacking>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <disk supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='diskDevice'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>disk</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cdrom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>floppy</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>lun</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ide</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>fdc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>sata</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </disk>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <graphics supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vnc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egl-headless</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </graphics>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <video supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='modelType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vga</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cirrus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>none</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>bochs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ramfb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </video>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hostdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='mode'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>subsystem</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='startupPolicy'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>mandatory</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>requisite</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>optional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='subsysType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pci</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='capsType'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='pciBackend'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hostdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <rng supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>random</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </rng>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <filesystem supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='driverType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>path</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>handle</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtiofs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </filesystem>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tpm supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-tis</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-crb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emulator</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>external</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendVersion'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>2.0</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </tpm>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <redirdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </redirdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <channel supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </channel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <crypto supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </crypto>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <interface supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>passt</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </interface>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <panic supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>isa</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>hyperv</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </panic>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <console supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>null</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dev</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pipe</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stdio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>udp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tcp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu-vdagent</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </console>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <gic supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <vmcoreinfo supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <genid supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backingStoreInput supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backup supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <async-teardown supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <s390-pv supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <ps2 supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tdx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sev supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sgx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hyperv supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='features'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>relaxed</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vapic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>spinlocks</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vpindex</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>runtime</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>synic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stimer</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reset</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vendor_id</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>frequencies</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reenlightenment</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tlbflush</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ipi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>avic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emsr_bitmap</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>xmm_input</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <spinlocks>4095</spinlocks>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <stimer_direct>on</stimer_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hyperv>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <launchSecurity supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </features>
Jan 31 08:24:09 compute-0 nova_compute[238824]: </domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.409 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 08:24:09 compute-0 nova_compute[238824]: <domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <path>/usr/libexec/qemu-kvm</path>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <domain>kvm</domain>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <arch>x86_64</arch>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <vcpu max='4096'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <iothreads supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <os supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='firmware'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>efi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <loader supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>rom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pflash</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='readonly'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>yes</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='secure'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>yes</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>no</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </loader>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </os>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-passthrough' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='hostPassthroughMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='maximum' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='maximumMigratable'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>on</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>off</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='host-model' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <vendor>AMD</vendor>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='x2apic'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-deadline'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='hypervisor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc_adjust'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='spec-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='stibp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='cmp_legacy'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='overflow-recov'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='succor'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='amd-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='virt-ssbd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lbrv'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='tsc-scale'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='vmcb-clean'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='flushbyasid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pause-filter'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='pfthreshold'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='svme-addr-chk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <feature policy='disable' name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <mode name='custom' supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Broadwell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cascadelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='ClearwaterForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ddpd-u'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sha512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm3'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sm4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Cooperlake-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Denverton-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Dhyana-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Genoa-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Milan-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Rome-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-Turin-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amd-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='auto-ibrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vp2intersect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fs-gs-base-ns'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibpb-brtype'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='no-nested-data-bp'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='null-sel-clr-base'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='perfmon-v2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbpb'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='srso-user-kernel-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='stibp-always-on'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='EPYC-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='GraniteRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-128'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-256'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx10-512'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='prefetchiti'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Haswell-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-noTSX'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v6'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Icelake-Server-v7'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='IvyBridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='KnightsMill-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4fmaps'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-4vnniw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512er'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512pf'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G4-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Opteron_G5-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fma4'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tbm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xop'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SapphireRapids-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='amx-tile'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-bf16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-fp16'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512-vpopcntdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bitalg'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vbmi2'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrc'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fzrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='la57'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='taa-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='tsx-ldtrk'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='SierraForest-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ifma'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-ne-convert'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx-vnni-int8'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bhi-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='bus-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cmpccxadd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fbsdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='fsrs'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ibrs-all'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='intel-psfd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ipred-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='lam'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mcdt-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pbrsb-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='psdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rrsba-ctrl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='sbdr-ssdp-no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='serialize'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vaes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='vpclmulqdq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Client-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='hle'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='rtm'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Skylake-Server-v5'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512bw'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512cd'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512dq'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512f'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='avx512vl'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='invpcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pcid'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='pku'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='mpx'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v2'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v3'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='core-capability'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='split-lock-detect'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='Snowridge-v4'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='cldemote'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='erms'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='gfni'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdir64b'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='movdiri'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='xsaves'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='athlon-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='core2duo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='coreduo-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='n270-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='ss'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <blockers model='phenom-v1'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnow'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <feature name='3dnowext'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </blockers>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </mode>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </cpu>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <memoryBacking supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <enum name='sourceType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>anonymous</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <value>memfd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </memoryBacking>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <disk supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='diskDevice'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>disk</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cdrom</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>floppy</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>lun</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>fdc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>sata</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </disk>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <graphics supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vnc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egl-headless</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </graphics>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <video supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='modelType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vga</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>cirrus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>none</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>bochs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ramfb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </video>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hostdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='mode'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>subsystem</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='startupPolicy'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>mandatory</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>requisite</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>optional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='subsysType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pci</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>scsi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='capsType'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='pciBackend'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hostdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <rng supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtio-non-transitional</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>random</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>egd</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </rng>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <filesystem supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='driverType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>path</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>handle</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>virtiofs</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </filesystem>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tpm supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-tis</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tpm-crb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emulator</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>external</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendVersion'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>2.0</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </tpm>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <redirdev supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='bus'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>usb</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </redirdev>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <channel supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </channel>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <crypto supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendModel'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>builtin</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </crypto>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <interface supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='backendType'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>default</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>passt</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </interface>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <panic supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='model'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>isa</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>hyperv</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </panic>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <console supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='type'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>null</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vc</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pty</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dev</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>file</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>pipe</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stdio</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>udp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tcp</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>unix</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>qemu-vdagent</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>dbus</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </console>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </devices>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   <features>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <gic supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <vmcoreinfo supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <genid supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backingStoreInput supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <backup supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <async-teardown supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <s390-pv supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <ps2 supported='yes'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <tdx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sev supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <sgx supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <hyperv supported='yes'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <enum name='features'>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>relaxed</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vapic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>spinlocks</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vpindex</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>runtime</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>synic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>stimer</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reset</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>vendor_id</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>frequencies</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>reenlightenment</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>tlbflush</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>ipi</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>avic</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>emsr_bitmap</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <value>xmm_input</value>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </enum>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       <defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <spinlocks>4095</spinlocks>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <stimer_direct>on</stimer_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_direct>on</tlbflush_direct>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <tlbflush_extended>on</tlbflush_extended>
Jan 31 08:24:09 compute-0 nova_compute[238824]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 08:24:09 compute-0 nova_compute[238824]:       </defaults>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     </hyperv>
Jan 31 08:24:09 compute-0 nova_compute[238824]:     <launchSecurity supported='no'/>
Jan 31 08:24:09 compute-0 nova_compute[238824]:   </features>
Jan 31 08:24:09 compute-0 nova_compute[238824]: </domainCapabilities>
Jan 31 08:24:09 compute-0 nova_compute[238824]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.468 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.469 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.469 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.473 238828 INFO nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Secure Boot support detected
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.474 238828 INFO nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.474 238828 INFO nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.483 238828 DEBUG nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.791 238828 INFO nova.virt.node [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Determined node identity 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from /var/lib/nova/compute_id
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.824 238828 WARNING nova.compute.manager [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Compute nodes ['6d4ff98f-eb37-47a1-bfaf-01e7f5329d98'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 31 08:24:09 compute-0 nova_compute[238824]: 2026-01-31 08:24:09.914 238828 INFO nova.compute.manager [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.016 238828 WARNING nova.compute.manager [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.017 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.017 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.018 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.018 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.019 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:10 compute-0 ceph-mon[75227]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:24:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3323564417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.725 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:10 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 08:24:10 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 31 08:24:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.957 238828 WARNING nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.958 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.958 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.958 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.973 238828 WARNING nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] No compute node record for compute-0.ctlplane.example.com:6d4ff98f-eb37-47a1-bfaf-01e7f5329d98: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 could not be found.
Jan 31 08:24:10 compute-0 nova_compute[238824]: 2026-01-31 08:24:10.994 238828 INFO nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98
Jan 31 08:24:11 compute-0 nova_compute[238824]: 2026-01-31 08:24:11.061 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:24:11 compute-0 nova_compute[238824]: 2026-01-31 08:24:11.062 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:24:11 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3323564417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:12 compute-0 nova_compute[238824]: 2026-01-31 08:24:12.055 238828 INFO nova.scheduler.client.report [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] [req-cd6794b3-ca5f-493c-ba06-b8a85a3a3273] Created resource provider record via placement API for resource provider with UUID 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 and name compute-0.ctlplane.example.com.
Jan 31 08:24:12 compute-0 ceph-mon[75227]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:12 compute-0 nova_compute[238824]: 2026-01-31 08:24:12.437 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:24:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:24:13 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229754555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.079 238828 DEBUG oslo_concurrency.processutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.084 238828 DEBUG nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 31 08:24:13 compute-0 nova_compute[238824]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.085 238828 INFO nova.virt.libvirt.host [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] kernel doesn't support AMD SEV
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.086 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.086 238828 DEBUG nova.virt.libvirt.driver [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.137 238828 DEBUG nova.scheduler.client.report [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updated inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.137 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.137 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:24:13 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/229754555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.230 238828 DEBUG nova.compute.provider_tree [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Updating resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.255 238828 DEBUG nova.compute.resource_tracker [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.255 238828 DEBUG oslo_concurrency.lockutils [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.256 238828 DEBUG nova.service [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 31 08:24:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.349 238828 DEBUG nova.service [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 31 08:24:13 compute-0 nova_compute[238824]: 2026-01-31 08:24:13.350 238828 DEBUG nova.servicegroup.drivers.db [None req-2c04beea-b301-4289-b04e-ab5631b910c8 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 31 08:24:14 compute-0 ceph-mon[75227]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:16 compute-0 ceph-mon[75227]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:24:17.883 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:24:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:24:17.884 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:24:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:24:17.884 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:24:18 compute-0 ceph-mon[75227]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:20 compute-0 ceph-mon[75227]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:21 compute-0 podman[239235]: 2026-01-31 08:24:21.182269263 +0000 UTC m=+0.071487534 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 08:24:21 compute-0 podman[239234]: 2026-01-31 08:24:21.203052563 +0000 UTC m=+0.097182226 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 08:24:21 compute-0 nova_compute[238824]: 2026-01-31 08:24:21.352 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:21 compute-0 nova_compute[238824]: 2026-01-31 08:24:21.373 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:24:22 compute-0 ceph-mon[75227]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:24 compute-0 ceph-mon[75227]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:25 compute-0 sudo[239279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:25 compute-0 sudo[239279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:25 compute-0 sudo[239279]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:25 compute-0 ceph-mon[75227]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:25 compute-0 sudo[239304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:24:25 compute-0 sudo[239304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:26 compute-0 sudo[239304]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:24:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:26 compute-0 sudo[239359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:26 compute-0 sudo[239359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:26 compute-0 sudo[239359]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:26 compute-0 sudo[239384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:24:26 compute-0 sudo[239384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.487398064 +0000 UTC m=+0.057620484 container create 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:24:26 compute-0 systemd[1]: Started libpod-conmon-13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771.scope.
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.459459788 +0000 UTC m=+0.029682258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.591826538 +0000 UTC m=+0.162048968 container init 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.600009334 +0000 UTC m=+0.170231754 container start 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.604525485 +0000 UTC m=+0.174747905 container attach 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:26 compute-0 focused_carson[239436]: 167 167
Jan 31 08:24:26 compute-0 systemd[1]: libpod-13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771.scope: Deactivated successfully.
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.608286393 +0000 UTC m=+0.178508803 container died 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:24:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:24:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:24:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:24:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fa9843d05484237daea77e45143acdeb16995fd0e7423b0d9d0f53819a35edf-merged.mount: Deactivated successfully.
Jan 31 08:24:26 compute-0 podman[239422]: 2026-01-31 08:24:26.653481027 +0000 UTC m=+0.223703417 container remove 13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:26 compute-0 systemd[1]: libpod-conmon-13f191d870cbfb9a830ac4d35ee33d0e28dcb74828ffb3643b003082714c6771.scope: Deactivated successfully.
Jan 31 08:24:26 compute-0 podman[239458]: 2026-01-31 08:24:26.840223707 +0000 UTC m=+0.055441581 container create ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:24:26 compute-0 systemd[1]: Started libpod-conmon-ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824.scope.
Jan 31 08:24:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:26 compute-0 podman[239458]: 2026-01-31 08:24:26.817559983 +0000 UTC m=+0.032777947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:26 compute-0 podman[239458]: 2026-01-31 08:24:26.937779313 +0000 UTC m=+0.152997277 container init ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:24:26 compute-0 podman[239458]: 2026-01-31 08:24:26.945735352 +0000 UTC m=+0.160953236 container start ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:26 compute-0 podman[239458]: 2026-01-31 08:24:26.949676846 +0000 UTC m=+0.164894740 container attach ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:24:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:27 compute-0 hopeful_haslett[239474]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:24:27 compute-0 hopeful_haslett[239474]: --> All data devices are unavailable
Jan 31 08:24:27 compute-0 systemd[1]: libpod-ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824.scope: Deactivated successfully.
Jan 31 08:24:27 compute-0 podman[239458]: 2026-01-31 08:24:27.406032537 +0000 UTC m=+0.621250451 container died ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-83397706f2967715748973871b73d244ae6bbeed5c19f9b2c0ebd847abfc7cb6-merged.mount: Deactivated successfully.
Jan 31 08:24:27 compute-0 podman[239458]: 2026-01-31 08:24:27.461036874 +0000 UTC m=+0.676254798 container remove ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_haslett, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:24:27 compute-0 systemd[1]: libpod-conmon-ebbba155d63d3e69d7c6bce36670ec8c5337d91b3105d2a233192dd030873824.scope: Deactivated successfully.
Jan 31 08:24:27 compute-0 sudo[239384]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:27 compute-0 sudo[239505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:27 compute-0 sudo[239505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:27 compute-0 sudo[239505]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:27 compute-0 sudo[239530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:24:27 compute-0 sudo[239530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:27 compute-0 ceph-mon[75227]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:27 compute-0 podman[239567]: 2026-01-31 08:24:27.978347975 +0000 UTC m=+0.082709429 container create 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:24:28 compute-0 podman[239567]: 2026-01-31 08:24:27.928548157 +0000 UTC m=+0.032909641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:28 compute-0 systemd[1]: Started libpod-conmon-0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e.scope.
Jan 31 08:24:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:28 compute-0 podman[239567]: 2026-01-31 08:24:28.219443862 +0000 UTC m=+0.323805336 container init 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:24:28 compute-0 podman[239567]: 2026-01-31 08:24:28.224422806 +0000 UTC m=+0.328784260 container start 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:24:28 compute-0 pensive_bell[239584]: 167 167
Jan 31 08:24:28 compute-0 systemd[1]: libpod-0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e.scope: Deactivated successfully.
Jan 31 08:24:28 compute-0 podman[239567]: 2026-01-31 08:24:28.270902827 +0000 UTC m=+0.375264281 container attach 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 31 08:24:28 compute-0 podman[239567]: 2026-01-31 08:24:28.271530705 +0000 UTC m=+0.375892169 container died 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:24:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c589394c3c5824bcc0fe5c89fa0e1b77033e63991d2618df2115cea26966399-merged.mount: Deactivated successfully.
Jan 31 08:24:28 compute-0 podman[239567]: 2026-01-31 08:24:28.758425088 +0000 UTC m=+0.862786562 container remove 0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:24:28 compute-0 systemd[1]: libpod-conmon-0b33b2c445f0fd741ed3a3fb08e50458e562ff161a7398ebddc8894bc281a10e.scope: Deactivated successfully.
Jan 31 08:24:28 compute-0 podman[239609]: 2026-01-31 08:24:28.956667179 +0000 UTC m=+0.092879101 container create 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:24:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:28 compute-0 podman[239609]: 2026-01-31 08:24:28.885843225 +0000 UTC m=+0.022055147 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:29 compute-0 systemd[1]: Started libpod-conmon-59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8.scope.
Jan 31 08:24:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:29 compute-0 podman[239609]: 2026-01-31 08:24:29.08109045 +0000 UTC m=+0.217302412 container init 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:24:29 compute-0 podman[239609]: 2026-01-31 08:24:29.089462252 +0000 UTC m=+0.225674154 container start 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:24:29 compute-0 podman[239609]: 2026-01-31 08:24:29.09426436 +0000 UTC m=+0.230476262 container attach 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:24:29 compute-0 naughty_kirch[239625]: {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:     "0": [
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:         {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "devices": [
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "/dev/loop3"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             ],
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_name": "ceph_lv0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_size": "21470642176",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "name": "ceph_lv0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "tags": {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.crush_device_class": "",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.encrypted": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.objectstore": "bluestore",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osd_id": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.type": "block",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.vdo": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.with_tpm": "0"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             },
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "type": "block",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "vg_name": "ceph_vg0"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:         }
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:     ],
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:     "1": [
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:         {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "devices": [
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "/dev/loop4"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             ],
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_name": "ceph_lv1",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_size": "21470642176",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "name": "ceph_lv1",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "tags": {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.crush_device_class": "",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.encrypted": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.objectstore": "bluestore",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osd_id": "1",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.type": "block",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.vdo": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.with_tpm": "0"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             },
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "type": "block",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "vg_name": "ceph_vg1"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:         }
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:     ],
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:     "2": [
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:         {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "devices": [
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "/dev/loop5"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             ],
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_name": "ceph_lv2",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_size": "21470642176",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "name": "ceph_lv2",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "tags": {
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.cluster_name": "ceph",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.crush_device_class": "",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.encrypted": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.objectstore": "bluestore",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osd_id": "2",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.type": "block",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.vdo": "0",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:                 "ceph.with_tpm": "0"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             },
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "type": "block",
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:             "vg_name": "ceph_vg2"
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:         }
Jan 31 08:24:29 compute-0 naughty_kirch[239625]:     ]
Jan 31 08:24:29 compute-0 naughty_kirch[239625]: }
Jan 31 08:24:29 compute-0 systemd[1]: libpod-59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8.scope: Deactivated successfully.
Jan 31 08:24:29 compute-0 podman[239609]: 2026-01-31 08:24:29.375804816 +0000 UTC m=+0.512016718 container died 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:24:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-575fe725875348bf6f46ef34853cced2f02ec52c3691bf28dc931eaa7aa9db2e-merged.mount: Deactivated successfully.
Jan 31 08:24:29 compute-0 podman[239609]: 2026-01-31 08:24:29.424844021 +0000 UTC m=+0.561055903 container remove 59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kirch, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:24:29 compute-0 systemd[1]: libpod-conmon-59e9cd841726b5f462ffd3b6a440ddf2804e822a89fad9c3ae8083dc353983b8.scope: Deactivated successfully.
Jan 31 08:24:29 compute-0 sudo[239530]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:29 compute-0 sudo[239646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:24:29 compute-0 sudo[239646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:29 compute-0 sudo[239646]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:29 compute-0 sudo[239671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:24:29 compute-0 sudo[239671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:29 compute-0 podman[239708]: 2026-01-31 08:24:29.892376875 +0000 UTC m=+0.035693842 container create b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:24:29 compute-0 systemd[1]: Started libpod-conmon-b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21.scope.
Jan 31 08:24:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:29 compute-0 podman[239708]: 2026-01-31 08:24:29.956063353 +0000 UTC m=+0.099380380 container init b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:24:29 compute-0 podman[239708]: 2026-01-31 08:24:29.964035133 +0000 UTC m=+0.107352150 container start b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:24:29 compute-0 sad_goodall[239724]: 167 167
Jan 31 08:24:29 compute-0 systemd[1]: libpod-b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21.scope: Deactivated successfully.
Jan 31 08:24:29 compute-0 podman[239708]: 2026-01-31 08:24:29.970381266 +0000 UTC m=+0.113698263 container attach b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:24:29 compute-0 podman[239708]: 2026-01-31 08:24:29.970959233 +0000 UTC m=+0.114276250 container died b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:24:29 compute-0 podman[239708]: 2026-01-31 08:24:29.877801134 +0000 UTC m=+0.021118141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d76ec5c6d83dd2621d2d776e47f563e89dd42bb906306bd4fe677ac091aeb5b-merged.mount: Deactivated successfully.
Jan 31 08:24:30 compute-0 podman[239708]: 2026-01-31 08:24:30.023085477 +0000 UTC m=+0.166402494 container remove b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goodall, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:30 compute-0 systemd[1]: libpod-conmon-b32ddf75e53a8075c3a2df8eb224284a1717528bc86b6f1f13c69f79eed4dd21.scope: Deactivated successfully.
Jan 31 08:24:30 compute-0 ceph-mon[75227]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:30 compute-0 podman[239747]: 2026-01-31 08:24:30.185242237 +0000 UTC m=+0.047217484 container create 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:24:30 compute-0 systemd[1]: Started libpod-conmon-689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5.scope.
Jan 31 08:24:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:24:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:24:30 compute-0 podman[239747]: 2026-01-31 08:24:30.160974697 +0000 UTC m=+0.022949964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:24:30 compute-0 podman[239747]: 2026-01-31 08:24:30.264901526 +0000 UTC m=+0.126876743 container init 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:24:30 compute-0 podman[239747]: 2026-01-31 08:24:30.270403935 +0000 UTC m=+0.132379142 container start 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:24:30 compute-0 podman[239747]: 2026-01-31 08:24:30.27406187 +0000 UTC m=+0.136037077 container attach 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886002857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2886002857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1466788001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1466788001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:30 compute-0 lvm[239840]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:24:30 compute-0 lvm[239840]: VG ceph_vg0 finished
Jan 31 08:24:30 compute-0 lvm[239843]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:24:30 compute-0 lvm[239843]: VG ceph_vg1 finished
Jan 31 08:24:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834280013' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:24:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834280013' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:30 compute-0 lvm[239845]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:24:30 compute-0 lvm[239845]: VG ceph_vg2 finished
Jan 31 08:24:30 compute-0 lvm[239846]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:24:30 compute-0 lvm[239846]: VG ceph_vg1 finished
Jan 31 08:24:30 compute-0 lvm[239848]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:24:30 compute-0 lvm[239848]: VG ceph_vg1 finished
Jan 31 08:24:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:30 compute-0 zen_meninsky[239764]: {}
Jan 31 08:24:31 compute-0 systemd[1]: libpod-689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5.scope: Deactivated successfully.
Jan 31 08:24:31 compute-0 podman[239747]: 2026-01-31 08:24:31.004811011 +0000 UTC m=+0.866786218 container died 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:24:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddeb20b20f84a47cd12a43a434ebaf5b0e9bb79699f91b096a3bb3126ac7eccf-merged.mount: Deactivated successfully.
Jan 31 08:24:31 compute-0 podman[239747]: 2026-01-31 08:24:31.049448689 +0000 UTC m=+0.911423906 container remove 689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_meninsky, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:24:31 compute-0 systemd[1]: libpod-conmon-689a36f7bf5ca61944cd533952e5677c40116343f106219c5637bd58e013e3e5.scope: Deactivated successfully.
Jan 31 08:24:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2886002857' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2886002857' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1466788001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1466788001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2834280013' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:24:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2834280013' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:24:31 compute-0 sudo[239671]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:24:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:24:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:24:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:24:31 compute-0 sudo[239862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:24:31 compute-0 sudo[239862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:24:31 compute-0 sudo[239862]: pam_unix(sudo:session): session closed for user root
Jan 31 08:24:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:24:31
Jan 31 08:24:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:24:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:24:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'vms', 'volumes', '.rgw.root']
Jan 31 08:24:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:24:32 compute-0 ceph-mon[75227]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:24:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:24:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:24:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:24:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:34 compute-0 ceph-mon[75227]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:36 compute-0 ceph-mon[75227]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:38 compute-0 ceph-mon[75227]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:40 compute-0 ceph-mon[75227]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:42 compute-0 ceph-mon[75227]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:24:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:24:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:43 compute-0 ceph-mon[75227]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:46 compute-0 ceph-mon[75227]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:47 compute-0 ceph-mon[75227]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:50 compute-0 ceph-mon[75227]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:52 compute-0 ceph-mon[75227]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:52 compute-0 podman[239888]: 2026-01-31 08:24:52.155967129 +0000 UTC m=+0.051951501 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:24:52 compute-0 podman[239887]: 2026-01-31 08:24:52.187279713 +0000 UTC m=+0.083285375 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 08:24:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:54 compute-0 ceph-mon[75227]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:56 compute-0 ceph-mon[75227]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:24:58 compute-0 ceph-mon[75227]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:24:59 compute-0 ceph-mon[75227]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:00 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:02 compute-0 ceph-mon[75227]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:02 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:04 compute-0 ceph-mon[75227]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:04 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:06 compute-0 ceph-mon[75227]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:06 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.342 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.342 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.343 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.343 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.436 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.437 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.437 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.437 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.438 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.529 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.530 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.530 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.530 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:25:07 compute-0 nova_compute[238824]: 2026-01-31 08:25:07.531 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:25:08 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4215308270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.055 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:08 compute-0 ceph-mon[75227]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:08 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/4215308270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.203 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.204 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5180MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.204 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.205 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.478 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.479 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:25:08 compute-0 nova_compute[238824]: 2026-01-31 08:25:08.516 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:25:08 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:25:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120157718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:09 compute-0 nova_compute[238824]: 2026-01-31 08:25:09.048 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:25:09 compute-0 nova_compute[238824]: 2026-01-31 08:25:09.053 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:25:09 compute-0 nova_compute[238824]: 2026-01-31 08:25:09.106 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:25:09 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/4120157718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:25:09 compute-0 nova_compute[238824]: 2026-01-31 08:25:09.172 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:25:09 compute-0 nova_compute[238824]: 2026-01-31 08:25:09.172 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:10 compute-0 ceph-mon[75227]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:10 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:12 compute-0 ceph-mon[75227]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:12 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:13 compute-0 ceph-mon[75227]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:14 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:16 compute-0 ceph-mon[75227]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:16 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 08:25:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2262470798' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:25:17 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:25:17 compute-0 ceph-mgr[75519]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:25:17 compute-0 ceph-mgr[75519]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:25:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:25:17.884 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:25:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:25:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:25:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:25:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:25:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:18 compute-0 ceph-mon[75227]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2262470798' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:25:18 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:19 compute-0 ceph-mon[75227]: from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:25:19 compute-0 ceph-mon[75227]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:20 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:22 compute-0 ceph-mon[75227]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:22 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:23 compute-0 podman[239975]: 2026-01-31 08:25:23.174824261 +0000 UTC m=+0.068640050 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:25:23 compute-0 podman[239976]: 2026-01-31 08:25:23.174875973 +0000 UTC m=+0.058778443 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 08:25:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:23 compute-0 ceph-mon[75227]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:24 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:26 compute-0 ceph-mon[75227]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:26 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:27 compute-0 ceph-mon[75227]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:28 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:30 compute-0 ceph-mon[75227]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:30 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:31 compute-0 sudo[240021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:31 compute-0 sudo[240021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:31 compute-0 sudo[240021]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:31 compute-0 sudo[240046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:25:31 compute-0 sudo[240046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:25:31
Jan 31 08:25:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:25:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:25:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms']
Jan 31 08:25:31 compute-0 sudo[240046]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:25:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:25:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:25:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:25:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:25:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:25:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:25:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:25:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:25:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:25:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:25:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:31 compute-0 sudo[240102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:31 compute-0 sudo[240102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:31 compute-0 sudo[240102]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:31 compute-0 sudo[240127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:25:31 compute-0 sudo[240127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:32 compute-0 podman[240165]: 2026-01-31 08:25:32.114641982 +0000 UTC m=+0.018733166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:32 compute-0 podman[240165]: 2026-01-31 08:25:32.30788034 +0000 UTC m=+0.211971504 container create 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:25:32 compute-0 ceph-mon[75227]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:25:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:25:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:25:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:25:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:25:32 compute-0 systemd[1]: Started libpod-conmon-2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d.scope.
Jan 31 08:25:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:25:32 compute-0 podman[240165]: 2026-01-31 08:25:32.855760976 +0000 UTC m=+0.759852180 container init 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:25:32 compute-0 podman[240165]: 2026-01-31 08:25:32.862814411 +0000 UTC m=+0.766905585 container start 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:25:32 compute-0 elated_pasteur[240184]: 167 167
Jan 31 08:25:32 compute-0 systemd[1]: libpod-2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d.scope: Deactivated successfully.
Jan 31 08:25:32 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:25:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:25:33 compute-0 podman[240165]: 2026-01-31 08:25:33.085202108 +0000 UTC m=+0.989293302 container attach 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:25:33 compute-0 podman[240165]: 2026-01-31 08:25:33.086892808 +0000 UTC m=+0.990983992 container died 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:25:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:33 compute-0 ceph-mon[75227]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-520b4e5d87dbc55e9a6d354c7cfa130e4032296f4c556e0a82fe8ede27de3688-merged.mount: Deactivated successfully.
Jan 31 08:25:34 compute-0 podman[240165]: 2026-01-31 08:25:34.959936008 +0000 UTC m=+2.864027162 container remove 2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_pasteur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:34 compute-0 systemd[1]: libpod-conmon-2c160f41884d192f1a1113447c184f249fab7b57c744a6a0402920c6e529458d.scope: Deactivated successfully.
Jan 31 08:25:34 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:35 compute-0 podman[240208]: 2026-01-31 08:25:35.061209718 +0000 UTC m=+0.025607297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:35 compute-0 podman[240208]: 2026-01-31 08:25:35.187651431 +0000 UTC m=+0.152048980 container create 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:25:35 compute-0 systemd[1]: Started libpod-conmon-5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6.scope.
Jan 31 08:25:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:35 compute-0 podman[240208]: 2026-01-31 08:25:35.896102754 +0000 UTC m=+0.860500353 container init 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:35 compute-0 podman[240208]: 2026-01-31 08:25:35.902739267 +0000 UTC m=+0.867136836 container start 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:25:35 compute-0 ceph-mon[75227]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:36 compute-0 podman[240208]: 2026-01-31 08:25:36.083861231 +0000 UTC m=+1.048258800 container attach 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:36 compute-0 friendly_darwin[240224]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:25:36 compute-0 friendly_darwin[240224]: --> All data devices are unavailable
Jan 31 08:25:36 compute-0 systemd[1]: libpod-5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6.scope: Deactivated successfully.
Jan 31 08:25:36 compute-0 podman[240208]: 2026-01-31 08:25:36.317451614 +0000 UTC m=+1.281849213 container died 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-31ff8929783fbbec1809d855c76c1fb1d8ee053931a7ecc8dfb7d217ac265369-merged.mount: Deactivated successfully.
Jan 31 08:25:36 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:37 compute-0 podman[240208]: 2026-01-31 08:25:37.035176336 +0000 UTC m=+1.999573875 container remove 5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_darwin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:37 compute-0 systemd[1]: libpod-conmon-5d08c215d3120fab25bec2a14ba3a36d57926e94f80277e67af59d1b136b8da6.scope: Deactivated successfully.
Jan 31 08:25:37 compute-0 sudo[240127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:37 compute-0 sudo[240255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:37 compute-0 sudo[240255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:37 compute-0 sudo[240255]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:37 compute-0 sudo[240280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:25:37 compute-0 sudo[240280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.484538854 +0000 UTC m=+0.062898053 container create 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.449535564 +0000 UTC m=+0.027894823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:37 compute-0 systemd[1]: Started libpod-conmon-1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6.scope.
Jan 31 08:25:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.669236343 +0000 UTC m=+0.247595522 container init 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.675264308 +0000 UTC m=+0.253623477 container start 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:25:37 compute-0 cranky_liskov[240334]: 167 167
Jan 31 08:25:37 compute-0 systemd[1]: libpod-1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6.scope: Deactivated successfully.
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.705475118 +0000 UTC m=+0.283834317 container attach 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.706170798 +0000 UTC m=+0.284529987 container died 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-76f5a5c166f4076e067f27147d417eeb30d2677ee0756cb05a927a37317153d2-merged.mount: Deactivated successfully.
Jan 31 08:25:37 compute-0 podman[240317]: 2026-01-31 08:25:37.937106634 +0000 UTC m=+0.515465793 container remove 1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:25:37 compute-0 systemd[1]: libpod-conmon-1121a47dde0a73111cbe7fd624846ba4c69bd658cab6548dfd8b28a31d853bb6.scope: Deactivated successfully.
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.110501884 +0000 UTC m=+0.050991016 container create 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:38 compute-0 systemd[1]: Started libpod-conmon-79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24.scope.
Jan 31 08:25:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.084603409 +0000 UTC m=+0.025092521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.199055803 +0000 UTC m=+0.139544915 container init 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.205932093 +0000 UTC m=+0.146421185 container start 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:25:38 compute-0 ceph-mon[75227]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.23295652 +0000 UTC m=+0.173445612 container attach 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:38 compute-0 busy_newton[240377]: {
Jan 31 08:25:38 compute-0 busy_newton[240377]:     "0": [
Jan 31 08:25:38 compute-0 busy_newton[240377]:         {
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "devices": [
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "/dev/loop3"
Jan 31 08:25:38 compute-0 busy_newton[240377]:             ],
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_name": "ceph_lv0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_size": "21470642176",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "name": "ceph_lv0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "tags": {
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.crush_device_class": "",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.encrypted": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.objectstore": "bluestore",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osd_id": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.type": "block",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.vdo": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.with_tpm": "0"
Jan 31 08:25:38 compute-0 busy_newton[240377]:             },
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "type": "block",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "vg_name": "ceph_vg0"
Jan 31 08:25:38 compute-0 busy_newton[240377]:         }
Jan 31 08:25:38 compute-0 busy_newton[240377]:     ],
Jan 31 08:25:38 compute-0 busy_newton[240377]:     "1": [
Jan 31 08:25:38 compute-0 busy_newton[240377]:         {
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "devices": [
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "/dev/loop4"
Jan 31 08:25:38 compute-0 busy_newton[240377]:             ],
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_name": "ceph_lv1",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_size": "21470642176",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "name": "ceph_lv1",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "tags": {
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.crush_device_class": "",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.encrypted": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.objectstore": "bluestore",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osd_id": "1",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.type": "block",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.vdo": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.with_tpm": "0"
Jan 31 08:25:38 compute-0 busy_newton[240377]:             },
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "type": "block",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "vg_name": "ceph_vg1"
Jan 31 08:25:38 compute-0 busy_newton[240377]:         }
Jan 31 08:25:38 compute-0 busy_newton[240377]:     ],
Jan 31 08:25:38 compute-0 busy_newton[240377]:     "2": [
Jan 31 08:25:38 compute-0 busy_newton[240377]:         {
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "devices": [
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "/dev/loop5"
Jan 31 08:25:38 compute-0 busy_newton[240377]:             ],
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_name": "ceph_lv2",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_size": "21470642176",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "name": "ceph_lv2",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "tags": {
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.cluster_name": "ceph",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.crush_device_class": "",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.encrypted": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.objectstore": "bluestore",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osd_id": "2",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.type": "block",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.vdo": "0",
Jan 31 08:25:38 compute-0 busy_newton[240377]:                 "ceph.with_tpm": "0"
Jan 31 08:25:38 compute-0 busy_newton[240377]:             },
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "type": "block",
Jan 31 08:25:38 compute-0 busy_newton[240377]:             "vg_name": "ceph_vg2"
Jan 31 08:25:38 compute-0 busy_newton[240377]:         }
Jan 31 08:25:38 compute-0 busy_newton[240377]:     ]
Jan 31 08:25:38 compute-0 busy_newton[240377]: }
Jan 31 08:25:38 compute-0 systemd[1]: libpod-79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24.scope: Deactivated successfully.
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.497622598 +0000 UTC m=+0.438111700 container died 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-54588d878f26ed421bb6e6fa147c5829373a479297b0cb69aa96da50537a48ad-merged.mount: Deactivated successfully.
Jan 31 08:25:38 compute-0 podman[240360]: 2026-01-31 08:25:38.631032584 +0000 UTC m=+0.571521686 container remove 79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:25:38 compute-0 systemd[1]: libpod-conmon-79f58523c8b081220e514dce47db724b60480ff4b47d2022ef80409eaf886c24.scope: Deactivated successfully.
Jan 31 08:25:38 compute-0 sudo[240280]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:38 compute-0 sudo[240400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:25:38 compute-0 sudo[240400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:38 compute-0 sudo[240400]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:38 compute-0 sudo[240425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:25:38 compute-0 sudo[240425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:38 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.060520532 +0000 UTC m=+0.031872939 container create c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:25:39 compute-0 systemd[1]: Started libpod-conmon-c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213.scope.
Jan 31 08:25:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.045134264 +0000 UTC m=+0.016486681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.198864101 +0000 UTC m=+0.170216618 container init c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.203921728 +0000 UTC m=+0.175274135 container start c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:25:39 compute-0 optimistic_greider[240478]: 167 167
Jan 31 08:25:39 compute-0 systemd[1]: libpod-c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213.scope: Deactivated successfully.
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.266492201 +0000 UTC m=+0.237844648 container attach c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.267036726 +0000 UTC m=+0.238389233 container died c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:25:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-99e2320e65e38026d061c2086bd5681e3de533c320752845f610136230f82318-merged.mount: Deactivated successfully.
Jan 31 08:25:39 compute-0 podman[240462]: 2026-01-31 08:25:39.430616371 +0000 UTC m=+0.401968808 container remove c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:39 compute-0 systemd[1]: libpod-conmon-c90587bd8d80f4d8aa91d2f4c2acbc09b9cd7f653bbe4f703e0c13f013a4d213.scope: Deactivated successfully.
Jan 31 08:25:39 compute-0 podman[240503]: 2026-01-31 08:25:39.616719481 +0000 UTC m=+0.089768146 container create 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:25:39 compute-0 podman[240503]: 2026-01-31 08:25:39.570401972 +0000 UTC m=+0.043450657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:25:39 compute-0 systemd[1]: Started libpod-conmon-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope.
Jan 31 08:25:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:25:39 compute-0 podman[240503]: 2026-01-31 08:25:39.794600611 +0000 UTC m=+0.267649376 container init 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:39 compute-0 podman[240503]: 2026-01-31 08:25:39.803707845 +0000 UTC m=+0.276756520 container start 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:25:39 compute-0 podman[240503]: 2026-01-31 08:25:39.903820671 +0000 UTC m=+0.376869456 container attach 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:25:40 compute-0 ceph-mon[75227]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:40 compute-0 lvm[240598]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:25:40 compute-0 lvm[240596]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:25:40 compute-0 lvm[240596]: VG ceph_vg0 finished
Jan 31 08:25:40 compute-0 lvm[240598]: VG ceph_vg1 finished
Jan 31 08:25:40 compute-0 lvm[240600]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:25:40 compute-0 lvm[240600]: VG ceph_vg2 finished
Jan 31 08:25:40 compute-0 pedantic_germain[240519]: {}
Jan 31 08:25:40 compute-0 systemd[1]: libpod-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope: Deactivated successfully.
Jan 31 08:25:40 compute-0 systemd[1]: libpod-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope: Consumed 1.113s CPU time.
Jan 31 08:25:40 compute-0 podman[240503]: 2026-01-31 08:25:40.610377029 +0000 UTC m=+1.083425714 container died 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7563788a1e461d2210915c08e98a5bf84baeade6ef82219aaf84a1440ac50fc-merged.mount: Deactivated successfully.
Jan 31 08:25:40 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:41 compute-0 podman[240503]: 2026-01-31 08:25:41.240950043 +0000 UTC m=+1.713998718 container remove 3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:25:41 compute-0 systemd[1]: libpod-conmon-3b611a3a5e5ffbec09e2bcaae74c52e74d5ca184bb79f7903cfdecf5e1d1d005.scope: Deactivated successfully.
Jan 31 08:25:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 31 08:25:41 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1086333539' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:25:41 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:25:41 compute-0 ceph-mgr[75519]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:25:41 compute-0 ceph-mgr[75519]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 31 08:25:41 compute-0 sudo[240425]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:25:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:25:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:25:41 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:25:41 compute-0 sudo[240615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:25:41 compute-0 sudo[240615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:25:41 compute-0 sudo[240615]: pam_unix(sudo:session): session closed for user root
Jan 31 08:25:41 compute-0 ceph-mon[75227]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1086333539' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 31 08:25:41 compute-0 ceph-mon[75227]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 31 08:25:41 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:25:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:25:42 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:25:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:25:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:43 compute-0 ceph-mon[75227]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:44 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:45 compute-0 ceph-mon[75227]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:46 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:47 compute-0 ceph-mon[75227]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:48 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:50 compute-0 ceph-mon[75227]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:50 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:51 compute-0 ceph-mon[75227]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:52 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:54 compute-0 ceph-mon[75227]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:54 compute-0 podman[240641]: 2026-01-31 08:25:54.169964235 +0000 UTC m=+0.055900259 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 08:25:54 compute-0 podman[240640]: 2026-01-31 08:25:54.201184534 +0000 UTC m=+0.091810224 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_managed=true, container_name=ovn_controller)
Jan 31 08:25:54 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:56 compute-0 ceph-mon[75227]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:56 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:57 compute-0 ceph-mon[75227]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:25:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:25:58 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:00 compute-0 ceph-mon[75227]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:01 compute-0 ceph-mon[75227]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:04 compute-0 ceph-mon[75227]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:05 compute-0 ceph-mon[75227]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:08 compute-0 ceph-mon[75227]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.164 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.204 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.205 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.205 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.242 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.242 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.242 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.243 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.317 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.318 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:26:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866780905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:09 compute-0 nova_compute[238824]: 2026-01-31 08:26:09.882 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.040 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.041 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5133MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.041 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.042 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:10 compute-0 ceph-mon[75227]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:10 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/866780905' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.227 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.243 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:26:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:26:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511150945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.830 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.835 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.871 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.873 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.873 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.970 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.970 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:26:10 compute-0 nova_compute[238824]: 2026-01-31 08:26:10.971 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:26:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:11 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/511150945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:26:12 compute-0 ceph-mon[75227]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:26:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3308 writes, 14K keys, 3308 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3308 writes, 3308 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1274 writes, 5564 keys, 1274 commit groups, 1.0 writes per commit group, ingest: 8.57 MB, 0.01 MB/s
                                           Interval WAL: 1274 writes, 1274 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     51.5      0.29              0.03         6    0.048       0      0       0.0       0.0
                                             L6      1/0    7.33 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4     72.7     60.0      0.60              0.11         5    0.120     19K   2202       0.0       0.0
                                            Sum      1/0    7.33 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     49.0     57.2      0.89              0.14        11    0.081     19K   2202       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     36.8     37.6      0.74              0.09         6    0.123     12K   1463       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     72.7     60.0      0.60              0.11         5    0.120     19K   2202       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     52.1      0.29              0.03         5    0.057       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.9 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 308.00 MB usage: 1.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(88,1.43 MB,0.464848%) FilterBlock(12,63.17 KB,0.0200296%) IndexBlock(12,130.39 KB,0.0413424%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:26:13 compute-0 ceph-mon[75227]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:16 compute-0 ceph-mon[75227]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.483 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:5f:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:1b:f0:08:31:5c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:26:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.485 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:26:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.486 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:26:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:26:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:26:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:26:17.885 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:26:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:26:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284052896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:26:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:26:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284052896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:26:18 compute-0 ceph-mon[75227]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2284052896' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:26:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2284052896' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:26:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:20 compute-0 ceph-mon[75227]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:22 compute-0 ceph-mon[75227]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:23 compute-0 ceph-mon[75227]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:25 compute-0 podman[240730]: 2026-01-31 08:26:25.187338538 +0000 UTC m=+0.067508207 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:26:25 compute-0 podman[240729]: 2026-01-31 08:26:25.208237557 +0000 UTC m=+0.092205816 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:26:26 compute-0 ceph-mon[75227]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:27 compute-0 ceph-mon[75227]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:30 compute-0 ceph-mon[75227]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:26:31
Jan 31 08:26:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:26:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:26:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms']
Jan 31 08:26:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:26:32 compute-0 ceph-mon[75227]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:26:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:26:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:26:33 compute-0 ceph-mon[75227]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:36 compute-0 ceph-mon[75227]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:37 compute-0 ceph-mon[75227]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:40 compute-0 ceph-mon[75227]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:41 compute-0 sudo[240773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:41 compute-0 sudo[240773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:41 compute-0 sudo[240773]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:41 compute-0 sudo[240798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:26:41 compute-0 sudo[240798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:42 compute-0 sudo[240798]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:26:42 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:26:42 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:26:42 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:26:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:26:42 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:26:42 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:26:42 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:26:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:26:42 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:26:42 compute-0 sudo[240854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:42 compute-0 sudo[240854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:42 compute-0 sudo[240854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:42 compute-0 sudo[240879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:26:42 compute-0 sudo[240879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.631983512 +0000 UTC m=+0.049211674 container create eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:26:42 compute-0 systemd[1]: Started libpod-conmon-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope.
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.604354317 +0000 UTC m=+0.021582479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.763569704 +0000 UTC m=+0.180797786 container init eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.773488083 +0000 UTC m=+0.190716145 container start eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 31 08:26:42 compute-0 systemd[1]: libpod-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope: Deactivated successfully.
Jan 31 08:26:42 compute-0 admiring_mendeleev[240933]: 167 167
Jan 31 08:26:42 compute-0 conmon[240933]: conmon eff07b4be281745ee49a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope/container/memory.events
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.793790304 +0000 UTC m=+0.211018416 container attach eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.79606083 +0000 UTC m=+0.213288912 container died eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-890727d47b249c47f4296fb55192a1ce188aab0af839f58dc49ab94004668e4f-merged.mount: Deactivated successfully.
Jan 31 08:26:42 compute-0 podman[240917]: 2026-01-31 08:26:42.970540282 +0000 UTC m=+0.387768394 container remove eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:26:43 compute-0 systemd[1]: libpod-conmon-eff07b4be281745ee49ac4add1682ed66443809ade1cd6696913d499d1fc9965.scope: Deactivated successfully.
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:43 compute-0 podman[240958]: 2026-01-31 08:26:43.194403311 +0000 UTC m=+0.069309329 container create ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:26:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:26:43 compute-0 systemd[1]: Started libpod-conmon-ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77.scope.
Jan 31 08:26:43 compute-0 podman[240958]: 2026-01-31 08:26:43.167027614 +0000 UTC m=+0.041933732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:43 compute-0 podman[240958]: 2026-01-31 08:26:43.318052233 +0000 UTC m=+0.192958261 container init ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:43 compute-0 podman[240958]: 2026-01-31 08:26:43.329538537 +0000 UTC m=+0.204444595 container start ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:26:43 compute-0 podman[240958]: 2026-01-31 08:26:43.333599365 +0000 UTC m=+0.208505423 container attach ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:26:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:43 compute-0 bold_carver[240975]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:26:43 compute-0 bold_carver[240975]: --> All data devices are unavailable
Jan 31 08:26:43 compute-0 systemd[1]: libpod-ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77.scope: Deactivated successfully.
Jan 31 08:26:43 compute-0 podman[240958]: 2026-01-31 08:26:43.798617089 +0000 UTC m=+0.673523137 container died ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:26:43 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 08:26:43 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:43.971143) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:26:43 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 08:26:43 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848003971198, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1843, "num_deletes": 505, "total_data_size": 2580126, "memory_usage": 2622608, "flush_reason": "Manual Compaction"}
Jan 31 08:26:43 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004056642, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2544428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13436, "largest_seqno": 15278, "table_properties": {"data_size": 2536397, "index_size": 4397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18805, "raw_average_key_size": 18, "raw_value_size": 2518403, "raw_average_value_size": 2478, "num_data_blocks": 200, "num_entries": 1016, "num_filter_entries": 1016, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847833, "oldest_key_time": 1769847833, "file_creation_time": 1769848003, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 85616 microseconds, and 5938 cpu microseconds.
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.056747) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2544428 bytes OK
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.056786) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.096859) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.096928) EVENT_LOG_v1 {"time_micros": 1769848004096918, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.096963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2571184, prev total WAL file size 2571184, number of live WAL files 2.
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.097835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2484KB)], [32(7506KB)]
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004097909, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10231578, "oldest_snapshot_seqno": -1}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3987 keys, 8178838 bytes, temperature: kUnknown
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004477800, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8178838, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8149457, "index_size": 18327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97350, "raw_average_key_size": 24, "raw_value_size": 8074618, "raw_average_value_size": 2025, "num_data_blocks": 776, "num_entries": 3987, "num_filter_entries": 3987, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb021d359e186263095e216f612297863e47a7b1012b53cdeded46eb14567763-merged.mount: Deactivated successfully.
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.478143) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8178838 bytes
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.548924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.9 rd, 21.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.3 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 5010, records dropped: 1023 output_compression: NoCompression
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.548969) EVENT_LOG_v1 {"time_micros": 1769848004548952, "job": 14, "event": "compaction_finished", "compaction_time_micros": 380009, "compaction_time_cpu_micros": 16096, "output_level": 6, "num_output_files": 1, "total_output_size": 8178838, "num_input_records": 5010, "num_output_records": 3987, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004549367, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848004550247, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.097698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:44 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:26:44.550543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:26:44 compute-0 ceph-mon[75227]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:44 compute-0 podman[240958]: 2026-01-31 08:26:44.988202943 +0000 UTC m=+1.863108961 container remove ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_carver, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:45 compute-0 sudo[240879]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:45 compute-0 systemd[1]: libpod-conmon-ad36464f66b53e87dd75d3daa30a65cd8707f8f4bc7c2da3c85b7dae93128e77.scope: Deactivated successfully.
Jan 31 08:26:45 compute-0 sudo[241007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:45 compute-0 sudo[241007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:45 compute-0 sudo[241007]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:45 compute-0 sudo[241032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:26:45 compute-0 sudo[241032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.394876537 +0000 UTC m=+0.058958748 container create 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:26:45 compute-0 systemd[1]: Started libpod-conmon-23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189.scope.
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.368058326 +0000 UTC m=+0.032140577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.491692496 +0000 UTC m=+0.155774757 container init 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.498541836 +0000 UTC m=+0.162624017 container start 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.50314343 +0000 UTC m=+0.167225701 container attach 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:45 compute-0 sharp_hertz[241083]: 167 167
Jan 31 08:26:45 compute-0 systemd[1]: libpod-23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189.scope: Deactivated successfully.
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.504204831 +0000 UTC m=+0.168287022 container died 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:26:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7ddb6011d7a64d2a2424c5e40c591cece94db472c0572d40063e2419c79990-merged.mount: Deactivated successfully.
Jan 31 08:26:45 compute-0 podman[241067]: 2026-01-31 08:26:45.701890488 +0000 UTC m=+0.365972689 container remove 23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:26:45 compute-0 systemd[1]: libpod-conmon-23f5b49a9de6732e158bd5bca1a027c6a31c97fa39a49cbdfdaa74214d785189.scope: Deactivated successfully.
Jan 31 08:26:45 compute-0 ceph-mon[75227]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:45 compute-0 podman[241107]: 2026-01-31 08:26:45.867040078 +0000 UTC m=+0.034832306 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:46 compute-0 podman[241107]: 2026-01-31 08:26:46.100786626 +0000 UTC m=+0.268578844 container create 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:26:46 compute-0 systemd[1]: Started libpod-conmon-3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339.scope.
Jan 31 08:26:46 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:46 compute-0 podman[241107]: 2026-01-31 08:26:46.557545008 +0000 UTC m=+0.725337196 container init 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:26:46 compute-0 podman[241107]: 2026-01-31 08:26:46.563924944 +0000 UTC m=+0.731717122 container start 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:26:46 compute-0 podman[241107]: 2026-01-31 08:26:46.568106106 +0000 UTC m=+0.735898314 container attach 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:26:46 compute-0 fervent_curran[241124]: {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:     "0": [
Jan 31 08:26:46 compute-0 fervent_curran[241124]:         {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "devices": [
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "/dev/loop3"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             ],
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_name": "ceph_lv0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_size": "21470642176",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "name": "ceph_lv0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "tags": {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.crush_device_class": "",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.encrypted": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.objectstore": "bluestore",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osd_id": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.type": "block",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.vdo": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.with_tpm": "0"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             },
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "type": "block",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "vg_name": "ceph_vg0"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:         }
Jan 31 08:26:46 compute-0 fervent_curran[241124]:     ],
Jan 31 08:26:46 compute-0 fervent_curran[241124]:     "1": [
Jan 31 08:26:46 compute-0 fervent_curran[241124]:         {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "devices": [
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "/dev/loop4"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             ],
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_name": "ceph_lv1",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_size": "21470642176",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "name": "ceph_lv1",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "tags": {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.crush_device_class": "",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.encrypted": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.objectstore": "bluestore",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osd_id": "1",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.type": "block",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.vdo": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.with_tpm": "0"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             },
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "type": "block",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "vg_name": "ceph_vg1"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:         }
Jan 31 08:26:46 compute-0 fervent_curran[241124]:     ],
Jan 31 08:26:46 compute-0 fervent_curran[241124]:     "2": [
Jan 31 08:26:46 compute-0 fervent_curran[241124]:         {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "devices": [
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "/dev/loop5"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             ],
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_name": "ceph_lv2",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_size": "21470642176",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "name": "ceph_lv2",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "tags": {
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.cluster_name": "ceph",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.crush_device_class": "",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.encrypted": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.objectstore": "bluestore",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osd_id": "2",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.type": "block",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.vdo": "0",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:                 "ceph.with_tpm": "0"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             },
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "type": "block",
Jan 31 08:26:46 compute-0 fervent_curran[241124]:             "vg_name": "ceph_vg2"
Jan 31 08:26:46 compute-0 fervent_curran[241124]:         }
Jan 31 08:26:46 compute-0 fervent_curran[241124]:     ]
Jan 31 08:26:46 compute-0 fervent_curran[241124]: }
Jan 31 08:26:46 compute-0 systemd[1]: libpod-3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339.scope: Deactivated successfully.
Jan 31 08:26:46 compute-0 podman[241133]: 2026-01-31 08:26:46.879098663 +0000 UTC m=+0.033123326 container died 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:26:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a0c311924b5e2fad2dce581d3441122bcf32a23c136c13944f3cd3325e7c46f-merged.mount: Deactivated successfully.
Jan 31 08:26:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:47 compute-0 podman[241133]: 2026-01-31 08:26:47.105558548 +0000 UTC m=+0.259583131 container remove 3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_curran, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:26:47 compute-0 systemd[1]: libpod-conmon-3412e05deca99e8659b703a421add0942a322f3cc1e7a0f2749f56cabff49339.scope: Deactivated successfully.
Jan 31 08:26:47 compute-0 sudo[241032]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:47 compute-0 sudo[241149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:26:47 compute-0 sudo[241149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:47 compute-0 sudo[241149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:47 compute-0 sudo[241174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:26:47 compute-0 sudo[241174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:47 compute-0 ceph-mon[75227]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.579183782 +0000 UTC m=+0.047569616 container create 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:26:47 compute-0 systemd[1]: Started libpod-conmon-2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6.scope.
Jan 31 08:26:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.558109448 +0000 UTC m=+0.026495272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.658651317 +0000 UTC m=+0.127037131 container init 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.664050474 +0000 UTC m=+0.132436268 container start 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:26:47 compute-0 gallant_payne[241228]: 167 167
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.669439581 +0000 UTC m=+0.137825395 container attach 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:26:47 compute-0 systemd[1]: libpod-2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6.scope: Deactivated successfully.
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.670232664 +0000 UTC m=+0.138618488 container died 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-48ea4443c358b096939367753becb02523dad83bc66980da33e9e81743e018a5-merged.mount: Deactivated successfully.
Jan 31 08:26:47 compute-0 podman[241211]: 2026-01-31 08:26:47.742285672 +0000 UTC m=+0.210671476 container remove 2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:26:47 compute-0 systemd[1]: libpod-conmon-2db6be10e04b2dd2cca71b45d1a2add5410b8e82c24fca78be7f9ff0ca9a24d6.scope: Deactivated successfully.
Jan 31 08:26:47 compute-0 podman[241252]: 2026-01-31 08:26:47.889222111 +0000 UTC m=+0.051120480 container create 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:26:47 compute-0 systemd[1]: Started libpod-conmon-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope.
Jan 31 08:26:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:26:47 compute-0 podman[241252]: 2026-01-31 08:26:47.858199637 +0000 UTC m=+0.020098046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:26:47 compute-0 podman[241252]: 2026-01-31 08:26:47.970302622 +0000 UTC m=+0.132201031 container init 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:26:47 compute-0 podman[241252]: 2026-01-31 08:26:47.976975157 +0000 UTC m=+0.138873526 container start 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:26:47 compute-0 podman[241252]: 2026-01-31 08:26:47.984489485 +0000 UTC m=+0.146387874 container attach 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:26:48 compute-0 lvm[241348]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:26:48 compute-0 lvm[241348]: VG ceph_vg1 finished
Jan 31 08:26:48 compute-0 lvm[241347]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:26:48 compute-0 lvm[241347]: VG ceph_vg0 finished
Jan 31 08:26:48 compute-0 lvm[241350]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:26:48 compute-0 lvm[241350]: VG ceph_vg2 finished
Jan 31 08:26:48 compute-0 interesting_moore[241269]: {}
Jan 31 08:26:48 compute-0 systemd[1]: libpod-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope: Deactivated successfully.
Jan 31 08:26:48 compute-0 systemd[1]: libpod-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope: Consumed 1.056s CPU time.
Jan 31 08:26:48 compute-0 podman[241252]: 2026-01-31 08:26:48.710222892 +0000 UTC m=+0.872121271 container died 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:26:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-af2314ebab0a724d6c4f5c249fd59af3f8c0e20845d41656a0adc2e01e8592da-merged.mount: Deactivated successfully.
Jan 31 08:26:48 compute-0 podman[241252]: 2026-01-31 08:26:48.761344431 +0000 UTC m=+0.923242800 container remove 139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:26:48 compute-0 systemd[1]: libpod-conmon-139d98b4d4dc59f8f8c1c978f63a02270a3aa20d4c2d38ef808f363f1d03cbc9.scope: Deactivated successfully.
Jan 31 08:26:48 compute-0 sudo[241174]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:26:48 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:26:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:26:48 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:26:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:48 compute-0 sudo[241367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:26:48 compute-0 sudo[241367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:26:48 compute-0 sudo[241367]: pam_unix(sudo:session): session closed for user root
Jan 31 08:26:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:49 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:26:49 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:26:49 compute-0 ceph-mon[75227]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:52 compute-0 ceph-mon[75227]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:53 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:54 compute-0 ceph-mon[75227]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:56 compute-0 ceph-mon[75227]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:56 compute-0 podman[241398]: 2026-01-31 08:26:56.181063651 +0000 UTC m=+0.056042623 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:26:56 compute-0 podman[241392]: 2026-01-31 08:26:56.206930425 +0000 UTC m=+0.096039178 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 08:26:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:58 compute-0 ceph-mon[75227]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:26:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:26:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:00 compute-0 ceph-mon[75227]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:02 compute-0 ceph-mon[75227]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:03 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:04 compute-0 ceph-mon[75227]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:06 compute-0 ceph-mon[75227]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:07 compute-0 nova_compute[238824]: 2026-01-31 08:27:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.416 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.417 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.417 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.418 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.418 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:08 compute-0 ceph-mon[75227]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.507 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.508 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.508 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.508 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:27:08 compute-0 nova_compute[238824]: 2026-01-31 08:27:08.509 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:08 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:27:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837891686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.102 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.236 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.238 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.238 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.238 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:09 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1837891686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:09 compute-0 ceph-mon[75227]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.453 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.454 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.473 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:27:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:27:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720361535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:09 compute-0 nova_compute[238824]: 2026-01-31 08:27:09.997 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:27:10 compute-0 nova_compute[238824]: 2026-01-31 08:27:10.004 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:27:10 compute-0 nova_compute[238824]: 2026-01-31 08:27:10.081 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:27:10 compute-0 nova_compute[238824]: 2026-01-31 08:27:10.085 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:27:10 compute-0 nova_compute[238824]: 2026-01-31 08:27:10.086 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:10 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3720361535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:27:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:11 compute-0 ceph-mon[75227]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:12 compute-0 nova_compute[238824]: 2026-01-31 08:27:12.009 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:12 compute-0 nova_compute[238824]: 2026-01-31 08:27:12.010 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:12 compute-0 nova_compute[238824]: 2026-01-31 08:27:12.010 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:27:12 compute-0 nova_compute[238824]: 2026-01-31 08:27:12.011 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:27:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:14 compute-0 ceph-mon[75227]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:15 compute-0 ceph-mon[75227]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:27:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5823 writes, 24K keys, 5823 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5823 writes, 961 syncs, 6.06 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:27:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:27:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:27:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:27:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:27:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:27:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:27:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:27:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296449466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:27:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:27:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296449466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:27:18 compute-0 ceph-mon[75227]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/296449466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:27:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/296449466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:27:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:20 compute-0 ceph-mon[75227]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:27:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7056 writes, 29K keys, 7056 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7056 writes, 1347 syncs, 5.24 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:27:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:21 compute-0 ceph-mon[75227]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:24 compute-0 ceph-mon[75227]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:26 compute-0 ceph-mon[75227]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:27 compute-0 podman[241481]: 2026-01-31 08:27:27.176881606 +0000 UTC m=+0.064269410 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:27:27 compute-0 podman[241480]: 2026-01-31 08:27:27.185425378 +0000 UTC m=+0.079756108 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 08:27:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:27 compute-0 ceph-mon[75227]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:27:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Cumulative writes: 5591 writes, 24K keys, 5591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5591 writes, 826 syncs, 6.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 227 writes, 342 keys, 227 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 227 writes, 113 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:27:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:30 compute-0 ceph-mon[75227]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:27:31
Jan 31 08:27:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:27:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:27:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'images', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root']
Jan 31 08:27:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 08:27:32 compute-0 ceph-mon[75227]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:27:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:27:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:34 compute-0 ceph-mon[75227]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:36 compute-0 ceph-mon[75227]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:38 compute-0 ceph-mon[75227]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:40 compute-0 ceph-mon[75227]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:42 compute-0 ceph-mon[75227]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:27:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:27:43 compute-0 ceph-mon[75227]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:45 compute-0 ceph-mon[75227]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:47 compute-0 ceph-mon[75227]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:48 compute-0 sudo[241525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:48 compute-0 sudo[241525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:48 compute-0 sudo[241525]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:49 compute-0 sudo[241550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:27:49 compute-0 sudo[241550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:49 compute-0 sudo[241550]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:27:49 compute-0 ceph-mon[75227]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:49 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:27:49 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:50 compute-0 sudo[241595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:50 compute-0 sudo[241595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:50 compute-0 sudo[241595]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:50 compute-0 sudo[241620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:27:50 compute-0 sudo[241620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:50 compute-0 sudo[241620]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:27:50 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:27:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:27:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:27:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:27:50 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:27:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:27:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:27:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:27:50 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:50 compute-0 sudo[241677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:50 compute-0 sudo[241677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:50 compute-0 sudo[241677]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:51 compute-0 sudo[241702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:27:51 compute-0 sudo[241702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:27:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:51 compute-0 podman[241739]: 2026-01-31 08:27:51.295028118 +0000 UTC m=+0.032392959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:51 compute-0 podman[241739]: 2026-01-31 08:27:51.689507128 +0000 UTC m=+0.426871909 container create 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:27:51 compute-0 systemd[1]: Started libpod-conmon-912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6.scope.
Jan 31 08:27:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:52 compute-0 podman[241739]: 2026-01-31 08:27:52.482085988 +0000 UTC m=+1.219450869 container init 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:27:52 compute-0 podman[241739]: 2026-01-31 08:27:52.492219145 +0000 UTC m=+1.229583936 container start 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:27:52 compute-0 focused_haslett[241756]: 167 167
Jan 31 08:27:52 compute-0 systemd[1]: libpod-912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6.scope: Deactivated successfully.
Jan 31 08:27:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:27:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:27:52 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:27:52 compute-0 ceph-mon[75227]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:52 compute-0 podman[241739]: 2026-01-31 08:27:52.708357655 +0000 UTC m=+1.445722466 container attach 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:27:52 compute-0 podman[241739]: 2026-01-31 08:27:52.708726656 +0000 UTC m=+1.446091477 container died 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-01880d2c3c11a601890f059a0790628f8b14800ebf57072c8f0225abcc57be6b-merged.mount: Deactivated successfully.
Jan 31 08:27:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:53 compute-0 podman[241739]: 2026-01-31 08:27:53.441552226 +0000 UTC m=+2.178917017 container remove 912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:27:53 compute-0 systemd[1]: libpod-conmon-912538ae4de42401ee4ba6364a6719ddf9a60f64641fb8197d43e18c3da8c4c6.scope: Deactivated successfully.
Jan 31 08:27:53 compute-0 podman[241781]: 2026-01-31 08:27:53.630321141 +0000 UTC m=+0.075517659 container create d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 08:27:53 compute-0 podman[241781]: 2026-01-31 08:27:53.588429195 +0000 UTC m=+0.033625703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:53 compute-0 systemd[1]: Started libpod-conmon-d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26.scope.
Jan 31 08:27:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:53 compute-0 ceph-mon[75227]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:53 compute-0 podman[241781]: 2026-01-31 08:27:53.771448987 +0000 UTC m=+0.216645485 container init d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:53 compute-0 podman[241781]: 2026-01-31 08:27:53.777991952 +0000 UTC m=+0.223188470 container start d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:27:53 compute-0 podman[241781]: 2026-01-31 08:27:53.801768496 +0000 UTC m=+0.246964974 container attach d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:54 compute-0 sad_napier[241797]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:27:54 compute-0 sad_napier[241797]: --> All data devices are unavailable
Jan 31 08:27:54 compute-0 systemd[1]: libpod-d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26.scope: Deactivated successfully.
Jan 31 08:27:54 compute-0 podman[241781]: 2026-01-31 08:27:54.246866079 +0000 UTC m=+0.692062557 container died d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:27:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-47398fb3688c1d8ff4c8723131b125bf139b18b59854892e0a5864f236c01e1f-merged.mount: Deactivated successfully.
Jan 31 08:27:54 compute-0 podman[241781]: 2026-01-31 08:27:54.765923245 +0000 UTC m=+1.211119763 container remove d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:27:54 compute-0 systemd[1]: libpod-conmon-d437ee616b9cfb48614c1553d7aeeac56d125415e80c4e7c88868ffd72d52c26.scope: Deactivated successfully.
Jan 31 08:27:54 compute-0 sudo[241702]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:54 compute-0 sudo[241830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:54 compute-0 sudo[241830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:54 compute-0 sudo[241830]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:54 compute-0 sudo[241855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:27:54 compute-0 sudo[241855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.267847317 +0000 UTC m=+0.047200918 container create 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:27:55 compute-0 systemd[1]: Started libpod-conmon-5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991.scope.
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.247845501 +0000 UTC m=+0.027199132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.400843963 +0000 UTC m=+0.180197614 container init 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.407210163 +0000 UTC m=+0.186563784 container start 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:27:55 compute-0 frosty_banzai[241909]: 167 167
Jan 31 08:27:55 compute-0 systemd[1]: libpod-5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991.scope: Deactivated successfully.
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.414907551 +0000 UTC m=+0.194261172 container attach 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.415530358 +0000 UTC m=+0.194883969 container died 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:27:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f464a803a5d9c90002fe68286d479fe653bf847e2da12b998638244cb0288b16-merged.mount: Deactivated successfully.
Jan 31 08:27:55 compute-0 podman[241893]: 2026-01-31 08:27:55.513726789 +0000 UTC m=+0.293080440 container remove 5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:27:55 compute-0 systemd[1]: libpod-conmon-5264b8119a79680b058d6a528f8f9626ff034d6d42ed5ebd735899a3cc22b991.scope: Deactivated successfully.
Jan 31 08:27:55 compute-0 podman[241934]: 2026-01-31 08:27:55.700036874 +0000 UTC m=+0.081090887 container create b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:27:55 compute-0 podman[241934]: 2026-01-31 08:27:55.643470063 +0000 UTC m=+0.024524126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:55 compute-0 systemd[1]: Started libpod-conmon-b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1.scope.
Jan 31 08:27:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:55 compute-0 podman[241934]: 2026-01-31 08:27:55.843037672 +0000 UTC m=+0.224091675 container init b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:27:55 compute-0 podman[241934]: 2026-01-31 08:27:55.849619359 +0000 UTC m=+0.230673322 container start b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:27:55 compute-0 podman[241934]: 2026-01-31 08:27:55.877808037 +0000 UTC m=+0.258862100 container attach b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]: {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:     "0": [
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:         {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "devices": [
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "/dev/loop3"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             ],
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_name": "ceph_lv0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_size": "21470642176",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "name": "ceph_lv0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "tags": {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.crush_device_class": "",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.encrypted": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.objectstore": "bluestore",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osd_id": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.type": "block",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.vdo": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.with_tpm": "0"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             },
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "type": "block",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "vg_name": "ceph_vg0"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:         }
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:     ],
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:     "1": [
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:         {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "devices": [
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "/dev/loop4"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             ],
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_name": "ceph_lv1",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_size": "21470642176",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "name": "ceph_lv1",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "tags": {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.crush_device_class": "",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.encrypted": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.objectstore": "bluestore",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osd_id": "1",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.type": "block",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.vdo": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.with_tpm": "0"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             },
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "type": "block",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "vg_name": "ceph_vg1"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:         }
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:     ],
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:     "2": [
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:         {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "devices": [
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "/dev/loop5"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             ],
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_name": "ceph_lv2",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_size": "21470642176",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "name": "ceph_lv2",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "tags": {
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.cluster_name": "ceph",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.crush_device_class": "",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.encrypted": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.objectstore": "bluestore",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osd_id": "2",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.type": "block",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.vdo": "0",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:                 "ceph.with_tpm": "0"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             },
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "type": "block",
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:             "vg_name": "ceph_vg2"
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:         }
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]:     ]
Jan 31 08:27:56 compute-0 wonderful_hawking[241951]: }
Jan 31 08:27:56 compute-0 systemd[1]: libpod-b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1.scope: Deactivated successfully.
Jan 31 08:27:56 compute-0 podman[241934]: 2026-01-31 08:27:56.16222391 +0000 UTC m=+0.543277903 container died b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c17854be6eb3f45f72ac0b69c12f4281790371492d00a3cf4a9383fc417750d-merged.mount: Deactivated successfully.
Jan 31 08:27:56 compute-0 podman[241934]: 2026-01-31 08:27:56.232092338 +0000 UTC m=+0.613146311 container remove b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:27:56 compute-0 systemd[1]: libpod-conmon-b81e87343576bd367b36ebbc60a0f72c327ca47c3b4aba845b81b061666e2bc1.scope: Deactivated successfully.
Jan 31 08:27:56 compute-0 sudo[241855]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:56 compute-0 ceph-mon[75227]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:56 compute-0 sudo[241976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:27:56 compute-0 sudo[241976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:56 compute-0 sudo[241976]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:56 compute-0 sudo[242001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:27:56 compute-0 sudo[242001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.637027424 +0000 UTC m=+0.024716111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.796949982 +0000 UTC m=+0.184638679 container create 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:27:56 compute-0 systemd[1]: Started libpod-conmon-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope.
Jan 31 08:27:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.926162421 +0000 UTC m=+0.313851078 container init 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.935691491 +0000 UTC m=+0.323380188 container start 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:27:56 compute-0 systemd[1]: libpod-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope: Deactivated successfully.
Jan 31 08:27:56 compute-0 zealous_hypatia[242055]: 167 167
Jan 31 08:27:56 compute-0 conmon[242055]: conmon 5305134cc25a55132d75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope/container/memory.events
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.944577063 +0000 UTC m=+0.332265760 container attach 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.945562771 +0000 UTC m=+0.333251428 container died 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:27:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-40158207d6c3cef511d0686ce6bc117049c878d57bf9f6c2f3a2c0fe9636a0fe-merged.mount: Deactivated successfully.
Jan 31 08:27:56 compute-0 podman[242039]: 2026-01-31 08:27:56.998297934 +0000 UTC m=+0.385986591 container remove 5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 08:27:57 compute-0 systemd[1]: libpod-conmon-5305134cc25a55132d753abdd220582d2a4f9ed59dd07a7fe9a1b98ce1befa0b.scope: Deactivated successfully.
Jan 31 08:27:57 compute-0 podman[242081]: 2026-01-31 08:27:57.138600367 +0000 UTC m=+0.043880494 container create f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:27:57 compute-0 systemd[1]: Started libpod-conmon-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope.
Jan 31 08:27:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:57 compute-0 podman[242081]: 2026-01-31 08:27:57.118886588 +0000 UTC m=+0.024166735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:27:57 compute-0 podman[242081]: 2026-01-31 08:27:57.235951373 +0000 UTC m=+0.141231550 container init f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:27:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:57 compute-0 podman[242081]: 2026-01-31 08:27:57.243808996 +0000 UTC m=+0.149089133 container start f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:57 compute-0 podman[242081]: 2026-01-31 08:27:57.258126811 +0000 UTC m=+0.163406948 container attach f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:27:57 compute-0 podman[242100]: 2026-01-31 08:27:57.275493763 +0000 UTC m=+0.069820878 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:27:57 compute-0 podman[242102]: 2026-01-31 08:27:57.329938954 +0000 UTC m=+0.115079959 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:27:57 compute-0 ceph-mon[75227]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:57 compute-0 lvm[242221]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:27:57 compute-0 lvm[242222]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:27:57 compute-0 lvm[242222]: VG ceph_vg1 finished
Jan 31 08:27:57 compute-0 lvm[242221]: VG ceph_vg0 finished
Jan 31 08:27:57 compute-0 lvm[242224]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:27:57 compute-0 lvm[242224]: VG ceph_vg2 finished
Jan 31 08:27:58 compute-0 competent_jemison[242098]: {}
Jan 31 08:27:58 compute-0 systemd[1]: libpod-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope: Deactivated successfully.
Jan 31 08:27:58 compute-0 systemd[1]: libpod-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope: Consumed 1.210s CPU time.
Jan 31 08:27:58 compute-0 podman[242227]: 2026-01-31 08:27:58.082429801 +0000 UTC m=+0.022535759 container died f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:27:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c11cbdeb6070b294f9c6058e73e151df7e0f210d4f77a6ec85c62cd7712c680-merged.mount: Deactivated successfully.
Jan 31 08:27:58 compute-0 podman[242227]: 2026-01-31 08:27:58.139868588 +0000 UTC m=+0.079974566 container remove f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:27:58 compute-0 systemd[1]: libpod-conmon-f948d7b53d178d5039ed3964c46025ecd6c292db9b4937e6845418c563d69b99.scope: Deactivated successfully.
Jan 31 08:27:58 compute-0 sudo[242001]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:27:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:27:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:58 compute-0 sudo[242242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:27:58 compute-0 sudo[242242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:27:58 compute-0 sudo[242242]: pam_unix(sudo:session): session closed for user root
Jan 31 08:27:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:27:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:27:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:27:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:28:00 compute-0 ceph-mon[75227]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:01 compute-0 ceph-mon[75227]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:04 compute-0 ceph-mon[75227]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:05 compute-0 ceph-mon[75227]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:07 compute-0 nova_compute[238824]: 2026-01-31 08:28:07.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:08 compute-0 nova_compute[238824]: 2026-01-31 08:28:08.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:08 compute-0 nova_compute[238824]: 2026-01-31 08:28:08.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:08 compute-0 ceph-mon[75227]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.335 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.350 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.350 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.350 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.361 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.362 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.362 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.385 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.386 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.386 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.387 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.387 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:09 compute-0 ceph-mon[75227]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:28:09 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820916702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:09 compute-0 nova_compute[238824]: 2026-01-31 08:28:09.960 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.109 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.109 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5143MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.110 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.110 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.236 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.237 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.256 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:28:10 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/820916702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:28:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3075693408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.810 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.817 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.833 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.834 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:28:10 compute-0 nova_compute[238824]: 2026-01-31 08:28:10.834 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:11 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3075693408' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:28:11 compute-0 ceph-mon[75227]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:12 compute-0 nova_compute[238824]: 2026-01-31 08:28:12.834 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:12 compute-0 nova_compute[238824]: 2026-01-31 08:28:12.835 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:13 compute-0 nova_compute[238824]: 2026-01-31 08:28:13.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:28:13 compute-0 nova_compute[238824]: 2026-01-31 08:28:13.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:28:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:14 compute-0 ceph-mon[75227]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:15 compute-0 ceph-mon[75227]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:28:17.886 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:28:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:28:17.887 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:28:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:28:17.887 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:28:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:28:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364113193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:28:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:28:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364113193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:28:18 compute-0 ceph-mon[75227]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1364113193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:28:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1364113193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:28:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:19 compute-0 ceph-mon[75227]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:22 compute-0 ceph-mon[75227]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:23 compute-0 ceph-mon[75227]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:25 compute-0 ceph-mon[75227]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:27 compute-0 ceph-mon[75227]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:28 compute-0 podman[242313]: 2026-01-31 08:28:28.173217768 +0000 UTC m=+0.052894268 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:28:28 compute-0 podman[242312]: 2026-01-31 08:28:28.190894269 +0000 UTC m=+0.070480247 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:28:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:30 compute-0 ceph-mon[75227]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 08:28:31 compute-0 ceph-mon[75227]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 08:28:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:28:31
Jan 31 08:28:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:28:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:28:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'backups', 'images']
Jan 31 08:28:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:28:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:28:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:28:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 08:28:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:34 compute-0 ceph-mon[75227]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 08:28:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 31 08:28:35 compute-0 ceph-mon[75227]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 31 08:28:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 08:28:38 compute-0 ceph-mon[75227]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 08:28:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.527924) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119527990, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 251, "total_data_size": 1798018, "memory_usage": 1830304, "flush_reason": "Manual Compaction"}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119595003, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1759698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15279, "largest_seqno": 16437, "table_properties": {"data_size": 1754143, "index_size": 2950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11630, "raw_average_key_size": 19, "raw_value_size": 1743042, "raw_average_value_size": 2924, "num_data_blocks": 135, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848004, "oldest_key_time": 1769848004, "file_creation_time": 1769848119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 67147 microseconds, and 5569 cpu microseconds.
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:28:39 compute-0 ceph-mon[75227]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.595073) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1759698 bytes OK
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.595098) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614126) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614171) EVENT_LOG_v1 {"time_micros": 1769848119614160, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614201) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1792702, prev total WAL file size 1793985, number of live WAL files 2.
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.615159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1718KB)], [35(7987KB)]
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119615234, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9938536, "oldest_snapshot_seqno": -1}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4069 keys, 8126965 bytes, temperature: kUnknown
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119799495, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8126965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8097195, "index_size": 18524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99553, "raw_average_key_size": 24, "raw_value_size": 8020972, "raw_average_value_size": 1971, "num_data_blocks": 782, "num_entries": 4069, "num_filter_entries": 4069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.799798) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8126965 bytes
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.812863) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 53.9 rd, 44.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.8 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(10.3) write-amplify(4.6) OK, records in: 4583, records dropped: 514 output_compression: NoCompression
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.812898) EVENT_LOG_v1 {"time_micros": 1769848119812883, "job": 16, "event": "compaction_finished", "compaction_time_micros": 184344, "compaction_time_cpu_micros": 13961, "output_level": 6, "num_output_files": 1, "total_output_size": 8126965, "num_input_records": 4583, "num_output_records": 4069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119813469, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848119815011, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.614983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815083) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:28:39 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:28:39.815100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:28:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:28:42 compute-0 ceph-mon[75227]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6947183441958982e-06 of space, bias 4.0, pg target 0.003233662013035078 quantized to 16 (current 16)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:28:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:28:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:44 compute-0 ceph-mon[75227]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 31 08:28:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 31 08:28:46 compute-0 ceph-mon[75227]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 31 08:28:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 31 08:28:47 compute-0 ceph-mon[75227]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Jan 31 08:28:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 08:28:50 compute-0 ceph-mon[75227]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 08:28:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 08:28:51 compute-0 ceph-mon[75227]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 31 08:28:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:54 compute-0 ceph-mon[75227]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:56 compute-0 ceph-mon[75227]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:58 compute-0 sudo[242356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:58 compute-0 sudo[242356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:58 compute-0 sudo[242356]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:58 compute-0 podman[242381]: 2026-01-31 08:28:58.394675911 +0000 UTC m=+0.038185592 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:28:58 compute-0 sudo[242393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:28:58 compute-0 sudo[242393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:58 compute-0 podman[242380]: 2026-01-31 08:28:58.425926136 +0000 UTC m=+0.070838517 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 08:28:58 compute-0 ceph-mon[75227]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:58 compute-0 podman[242494]: 2026-01-31 08:28:58.851339101 +0000 UTC m=+0.142492785 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:28:58 compute-0 podman[242494]: 2026-01-31 08:28:58.938004795 +0000 UTC m=+0.229158419 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 31 08:28:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:28:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:59 compute-0 sudo[242393]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:28:59 compute-0 ceph-mon[75227]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:28:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:28:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:28:59 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:28:59 compute-0 sudo[242679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:28:59 compute-0 sudo[242679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:28:59 compute-0 sudo[242679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:28:59 compute-0 sudo[242704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:28:59 compute-0 sudo[242704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:00 compute-0 sudo[242704]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:29:00 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:00 compute-0 sudo[242760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:00 compute-0 sudo[242760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:00 compute-0 sudo[242760]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:00 compute-0 sudo[242785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:29:00 compute-0 sudo[242785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:00 compute-0 podman[242822]: 2026-01-31 08:29:00.792630408 +0000 UTC m=+0.025302908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:29:00 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:29:00 compute-0 podman[242822]: 2026-01-31 08:29:00.94845855 +0000 UTC m=+0.181130980 container create 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:01 compute-0 systemd[1]: Started libpod-conmon-1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298.scope.
Jan 31 08:29:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:01 compute-0 podman[242822]: 2026-01-31 08:29:01.058037673 +0000 UTC m=+0.290710103 container init 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:29:01 compute-0 podman[242822]: 2026-01-31 08:29:01.062914881 +0000 UTC m=+0.295587311 container start 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:29:01 compute-0 sleepy_curran[242839]: 167 167
Jan 31 08:29:01 compute-0 systemd[1]: libpod-1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298.scope: Deactivated successfully.
Jan 31 08:29:01 compute-0 podman[242822]: 2026-01-31 08:29:01.066731969 +0000 UTC m=+0.299404399 container attach 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:01 compute-0 podman[242822]: 2026-01-31 08:29:01.067171392 +0000 UTC m=+0.299843852 container died 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:29:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0e21d33da33a8ab516b54ea6a40aedb2c91b2ae0298f4140abe886822a404bb-merged.mount: Deactivated successfully.
Jan 31 08:29:01 compute-0 podman[242822]: 2026-01-31 08:29:01.109110079 +0000 UTC m=+0.341782509 container remove 1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:29:01 compute-0 systemd[1]: libpod-conmon-1e4c812ce0f2b83072a6a1d6c2e236b422035e69824e60e0c267d331cd747298.scope: Deactivated successfully.
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.253915909 +0000 UTC m=+0.049337758 container create 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:29:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:01 compute-0 systemd[1]: Started libpod-conmon-97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5.scope.
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.225413672 +0000 UTC m=+0.020835561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:01 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.3450461 +0000 UTC m=+0.140467979 container init 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.350488574 +0000 UTC m=+0.145910423 container start 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.355015082 +0000 UTC m=+0.150436961 container attach 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:29:01 compute-0 romantic_rosalind[242879]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:29:01 compute-0 romantic_rosalind[242879]: --> All data devices are unavailable
Jan 31 08:29:01 compute-0 systemd[1]: libpod-97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5.scope: Deactivated successfully.
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.761291866 +0000 UTC m=+0.556713705 container died 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:29:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3be88c131685d0e71679a699b09d22c1ae897427ded07e999d9660442b202ec1-merged.mount: Deactivated successfully.
Jan 31 08:29:01 compute-0 podman[242862]: 2026-01-31 08:29:01.800619179 +0000 UTC m=+0.596041078 container remove 97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_rosalind, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:01 compute-0 systemd[1]: libpod-conmon-97efd2a01f1e2dbe412d55237ba42c4428b712614f9fe7bbb2a6426e757d48d5.scope: Deactivated successfully.
Jan 31 08:29:01 compute-0 sudo[242785]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:01 compute-0 sudo[242912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:01 compute-0 sudo[242912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:01 compute-0 sudo[242912]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:01 compute-0 ceph-mon[75227]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:01 compute-0 sudo[242937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:29:01 compute-0 sudo[242937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.230204903 +0000 UTC m=+0.034665853 container create bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:02 compute-0 systemd[1]: Started libpod-conmon-bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8.scope.
Jan 31 08:29:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.30848525 +0000 UTC m=+0.112946230 container init bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.214965482 +0000 UTC m=+0.019426392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.316172217 +0000 UTC m=+0.120633167 container start bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:29:02 compute-0 intelligent_allen[242990]: 167 167
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.320566842 +0000 UTC m=+0.125027862 container attach bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.32156222 +0000 UTC m=+0.126023140 container died bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:02 compute-0 systemd[1]: libpod-bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8.scope: Deactivated successfully.
Jan 31 08:29:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-f711bcf2227c7c7ddfd3c88b941c7b7cad4b70876e15acd827abb6238eb5357f-merged.mount: Deactivated successfully.
Jan 31 08:29:02 compute-0 podman[242973]: 2026-01-31 08:29:02.364987149 +0000 UTC m=+0.169448099 container remove bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:29:02 compute-0 systemd[1]: libpod-conmon-bab28281af9b96eefb29370af72ff5fb5dd63f1dc22abb8a7c54f084dae6cbd8.scope: Deactivated successfully.
Jan 31 08:29:02 compute-0 podman[243013]: 2026-01-31 08:29:02.583565079 +0000 UTC m=+0.080831770 container create acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:29:02 compute-0 systemd[1]: Started libpod-conmon-acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293.scope.
Jan 31 08:29:02 compute-0 podman[243013]: 2026-01-31 08:29:02.532543434 +0000 UTC m=+0.029810205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:02 compute-0 podman[243013]: 2026-01-31 08:29:02.653833708 +0000 UTC m=+0.151100419 container init acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:29:02 compute-0 podman[243013]: 2026-01-31 08:29:02.662558115 +0000 UTC m=+0.159824806 container start acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:29:02 compute-0 podman[243013]: 2026-01-31 08:29:02.665496468 +0000 UTC m=+0.162763179 container attach acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:29:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:02 compute-0 unruffled_turing[243030]: {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:     "0": [
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:         {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "devices": [
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "/dev/loop3"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             ],
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_name": "ceph_lv0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_size": "21470642176",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "name": "ceph_lv0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "tags": {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.crush_device_class": "",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.encrypted": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.objectstore": "bluestore",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osd_id": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.type": "block",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.vdo": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.with_tpm": "0"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             },
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "type": "block",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "vg_name": "ceph_vg0"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:         }
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:     ],
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:     "1": [
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:         {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "devices": [
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "/dev/loop4"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             ],
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_name": "ceph_lv1",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_size": "21470642176",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "name": "ceph_lv1",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "tags": {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.crush_device_class": "",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.encrypted": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.objectstore": "bluestore",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osd_id": "1",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.type": "block",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.vdo": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.with_tpm": "0"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             },
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "type": "block",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "vg_name": "ceph_vg1"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:         }
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:     ],
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:     "2": [
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:         {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "devices": [
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "/dev/loop5"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             ],
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_name": "ceph_lv2",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_size": "21470642176",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "name": "ceph_lv2",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "tags": {
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.cluster_name": "ceph",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.crush_device_class": "",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.encrypted": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.objectstore": "bluestore",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osd_id": "2",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.type": "block",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.vdo": "0",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:                 "ceph.with_tpm": "0"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             },
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "type": "block",
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:             "vg_name": "ceph_vg2"
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:         }
Jan 31 08:29:02 compute-0 unruffled_turing[243030]:     ]
Jan 31 08:29:02 compute-0 unruffled_turing[243030]: }
Jan 31 08:29:02 compute-0 systemd[1]: libpod-acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293.scope: Deactivated successfully.
Jan 31 08:29:02 compute-0 podman[243013]: 2026-01-31 08:29:02.970184196 +0000 UTC m=+0.467450927 container died acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa602ebea2818680b66d59c7f2ea03b4339befb801f1cf5597ea4baffff1512b-merged.mount: Deactivated successfully.
Jan 31 08:29:03 compute-0 podman[243013]: 2026-01-31 08:29:03.148557086 +0000 UTC m=+0.645823787 container remove acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:29:03 compute-0 systemd[1]: libpod-conmon-acccd0bd227e2f6622bfced5bdd8f701b7dce080997a0ecca224b047dbd88293.scope: Deactivated successfully.
Jan 31 08:29:03 compute-0 sudo[242937]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:03 compute-0 sudo[243052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:29:03 compute-0 sudo[243052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:03 compute-0 sudo[243052]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:03 compute-0 sudo[243077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:29:03 compute-0 sudo[243077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:03 compute-0 podman[243113]: 2026-01-31 08:29:03.572750607 +0000 UTC m=+0.038404788 container create 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:29:03 compute-0 systemd[1]: Started libpod-conmon-45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967.scope.
Jan 31 08:29:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:03 compute-0 podman[243113]: 2026-01-31 08:29:03.551321681 +0000 UTC m=+0.016975862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:03 compute-0 podman[243113]: 2026-01-31 08:29:03.654760239 +0000 UTC m=+0.120414430 container init 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:03 compute-0 podman[243113]: 2026-01-31 08:29:03.665835893 +0000 UTC m=+0.131490094 container start 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:29:03 compute-0 podman[243113]: 2026-01-31 08:29:03.670005801 +0000 UTC m=+0.135659982 container attach 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:29:03 compute-0 affectionate_ganguly[243129]: 167 167
Jan 31 08:29:03 compute-0 systemd[1]: libpod-45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967.scope: Deactivated successfully.
Jan 31 08:29:03 compute-0 podman[243113]: 2026-01-31 08:29:03.671825983 +0000 UTC m=+0.137480184 container died 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:29:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0d03bdbf577f1b33027f84d9b88eb8de7b8354cb5b26bcf8d6319e90ae08a88-merged.mount: Deactivated successfully.
Jan 31 08:29:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:04 compute-0 ceph-mon[75227]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:04 compute-0 podman[243113]: 2026-01-31 08:29:04.356110468 +0000 UTC m=+0.821764629 container remove 45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ganguly, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:29:04 compute-0 systemd[1]: libpod-conmon-45a4207a55f78ffd9d7ae6725665164046838045db44182c5ab7c960444af967.scope: Deactivated successfully.
Jan 31 08:29:04 compute-0 podman[243154]: 2026-01-31 08:29:04.496340998 +0000 UTC m=+0.024687910 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:29:04 compute-0 podman[243154]: 2026-01-31 08:29:04.613515946 +0000 UTC m=+0.141862828 container create a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:29:04 compute-0 systemd[1]: Started libpod-conmon-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope.
Jan 31 08:29:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:29:04 compute-0 podman[243154]: 2026-01-31 08:29:04.880389713 +0000 UTC m=+0.408736595 container init a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:29:04 compute-0 podman[243154]: 2026-01-31 08:29:04.890380676 +0000 UTC m=+0.418727558 container start a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 31 08:29:04 compute-0 podman[243154]: 2026-01-31 08:29:04.953866523 +0000 UTC m=+0.482213435 container attach a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:29:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:05 compute-0 lvm[243248]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:29:05 compute-0 lvm[243248]: VG ceph_vg0 finished
Jan 31 08:29:05 compute-0 lvm[243249]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:29:05 compute-0 lvm[243249]: VG ceph_vg1 finished
Jan 31 08:29:05 compute-0 lvm[243251]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:29:05 compute-0 lvm[243251]: VG ceph_vg2 finished
Jan 31 08:29:05 compute-0 ceph-mon[75227]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:05 compute-0 hungry_tesla[243170]: {}
Jan 31 08:29:05 compute-0 systemd[1]: libpod-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope: Deactivated successfully.
Jan 31 08:29:05 compute-0 systemd[1]: libpod-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope: Consumed 1.159s CPU time.
Jan 31 08:29:05 compute-0 podman[243154]: 2026-01-31 08:29:05.71905495 +0000 UTC m=+1.247401892 container died a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:29:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-47f6473fb6e61b5d43cc5e5581c64cef8ad81f05e8e664adc38d797e8c5ee202-merged.mount: Deactivated successfully.
Jan 31 08:29:06 compute-0 podman[243154]: 2026-01-31 08:29:06.46460851 +0000 UTC m=+1.992955432 container remove a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:29:06 compute-0 sudo[243077]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:29:06 compute-0 systemd[1]: libpod-conmon-a86d06b8e38b3ff2b22cea19f777a55f8dec8134a4e39112f22030fe9b437dc6.scope: Deactivated successfully.
Jan 31 08:29:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:29:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:06 compute-0 sudo[243266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:29:06 compute-0 sudo[243266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:29:06 compute-0 sudo[243266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:29:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.342 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.342 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.368 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.370 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.370 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:29:07 compute-0 nova_compute[238824]: 2026-01-31 08:29:07.462 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:07 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:07 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:29:07 compute-0 ceph-mon[75227]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:08 compute-0 nova_compute[238824]: 2026-01-31 08:29:08.550 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:09 compute-0 nova_compute[238824]: 2026-01-31 08:29:09.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:10 compute-0 ceph-mon[75227]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.367 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.367 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:29:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814591676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:10 compute-0 nova_compute[238824]: 2026-01-31 08:29:10.965 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.131 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.132 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.132 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.132 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.322 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.322 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:29:11 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2814591676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.341 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:29:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:29:11 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2168732476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.855 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.860 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.876 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.878 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:29:11 compute-0 nova_compute[238824]: 2026-01-31 08:29:11.878 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:12 compute-0 ceph-mon[75227]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:12 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2168732476' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:29:12 compute-0 nova_compute[238824]: 2026-01-31 08:29:12.873 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:12 compute-0 nova_compute[238824]: 2026-01-31 08:29:12.874 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:12 compute-0 nova_compute[238824]: 2026-01-31 08:29:12.874 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:29:12 compute-0 nova_compute[238824]: 2026-01-31 08:29:12.874 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:29:12 compute-0 nova_compute[238824]: 2026-01-31 08:29:12.889 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:29:12 compute-0 nova_compute[238824]: 2026-01-31 08:29:12.889 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:13 compute-0 nova_compute[238824]: 2026-01-31 08:29:13.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:13 compute-0 nova_compute[238824]: 2026-01-31 08:29:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:29:13 compute-0 nova_compute[238824]: 2026-01-31 08:29:13.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:29:13 compute-0 ceph-mon[75227]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:15 compute-0 ceph-mon[75227]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:29:17.887 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:29:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:29:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:29:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:29:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:29:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:29:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538313178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:29:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:29:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3538313178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:29:18 compute-0 ceph-mon[75227]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3538313178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:29:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3538313178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:29:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:20 compute-0 ceph-mon[75227]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:21 compute-0 ceph-mon[75227]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:24 compute-0 ceph-mon[75227]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:25 compute-0 ceph-mon[75227]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:27 compute-0 ceph-mon[75227]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 08:29:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 08:29:28 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 08:29:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:29 compute-0 podman[243336]: 2026-01-31 08:29:29.161611348 +0000 UTC m=+0.054275241 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:29:29 compute-0 podman[243335]: 2026-01-31 08:29:29.209429314 +0000 UTC m=+0.101140400 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:29:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 08:29:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 08:29:30 compute-0 ceph-mon[75227]: osdmap e128: 3 total, 3 up, 3 in
Jan 31 08:29:30 compute-0 ceph-mon[75227]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:29:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 08:29:31 compute-0 ceph-mon[75227]: osdmap e129: 3 total, 3 up, 3 in
Jan 31 08:29:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 383 B/s wr, 3 op/s
Jan 31 08:29:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:29:31
Jan 31 08:29:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:29:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:29:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'volumes', 'backups', '.mgr']
Jan 31 08:29:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:29:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 08:29:32 compute-0 ceph-mon[75227]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 383 B/s wr, 3 op/s
Jan 31 08:29:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 08:29:32 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 08:29:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:29:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:29:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 08:29:34 compute-0 ceph-mon[75227]: osdmap e130: 3 total, 3 up, 3 in
Jan 31 08:29:34 compute-0 ceph-mon[75227]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 08:29:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 31 08:29:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 08:29:36 compute-0 ceph-mon[75227]: pgmap v812: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Jan 31 08:29:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 08:29:37 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 08:29:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Jan 31 08:29:38 compute-0 ceph-mon[75227]: osdmap e131: 3 total, 3 up, 3 in
Jan 31 08:29:38 compute-0 ceph-mon[75227]: pgmap v814: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.0 MiB/s wr, 40 op/s
Jan 31 08:29:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 08:29:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 08:29:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.6 MiB/s wr, 32 op/s
Jan 31 08:29:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 08:29:39 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 08:29:40 compute-0 ceph-mon[75227]: pgmap v815: 305 pgs: 305 active+clean; 37 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 4.6 MiB/s wr, 32 op/s
Jan 31 08:29:40 compute-0 ceph-mon[75227]: osdmap e132: 3 total, 3 up, 3 in
Jan 31 08:29:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Jan 31 08:29:42 compute-0 ceph-mon[75227]: pgmap v817: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 5.1 MiB/s wr, 43 op/s
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.1 MiB/s wr, 33 op/s
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659181352284859 of space, bias 1.0, pg target 0.19977544056854576 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6927621847268092e-06 of space, bias 4.0, pg target 0.003231314621672171 quantized to 16 (current 16)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:29:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:29:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:44 compute-0 ceph-mon[75227]: pgmap v818: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.1 MiB/s wr, 33 op/s
Jan 31 08:29:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 552 KiB/s wr, 10 op/s
Jan 31 08:29:45 compute-0 ceph-mon[75227]: pgmap v819: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 552 KiB/s wr, 10 op/s
Jan 31 08:29:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 08:29:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 08:29:46 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 08:29:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 569 KiB/s wr, 11 op/s
Jan 31 08:29:47 compute-0 ceph-mon[75227]: osdmap e133: 3 total, 3 up, 3 in
Jan 31 08:29:47 compute-0 ceph-mon[75227]: pgmap v821: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 569 KiB/s wr, 11 op/s
Jan 31 08:29:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 463 KiB/s wr, 9 op/s
Jan 31 08:29:50 compute-0 ceph-mon[75227]: pgmap v822: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 463 KiB/s wr, 9 op/s
Jan 31 08:29:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:29:51 compute-0 ceph-mon[75227]: pgmap v823: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:29:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 08:29:52 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 08:29:52 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 08:29:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 08:29:54 compute-0 ceph-mon[75227]: osdmap e134: 3 total, 3 up, 3 in
Jan 31 08:29:54 compute-0 ceph-mon[75227]: pgmap v825: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 08:29:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 08:29:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 08:29:54 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 08:29:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 31 08:29:55 compute-0 ceph-mon[75227]: osdmap e135: 3 total, 3 up, 3 in
Jan 31 08:29:55 compute-0 ceph-mon[75227]: pgmap v827: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 31 08:29:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 4.9 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Jan 31 08:29:58 compute-0 ceph-mon[75227]: pgmap v828: 305 pgs: 305 active+clean; 4.9 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 57 op/s
Jan 31 08:29:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 4.9 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 26 op/s
Jan 31 08:29:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:29:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 08:29:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 08:29:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 08:30:00 compute-0 ceph-mon[75227]: pgmap v829: 305 pgs: 305 active+clean; 4.9 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 26 op/s
Jan 31 08:30:00 compute-0 podman[243385]: 2026-01-31 08:30:00.178979205 +0000 UTC m=+0.056065981 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:30:00 compute-0 podman[243384]: 2026-01-31 08:30:00.230038453 +0000 UTC m=+0.107539230 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:30:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 08:30:01 compute-0 ceph-mon[75227]: osdmap e136: 3 total, 3 up, 3 in
Jan 31 08:30:02 compute-0 ceph-mon[75227]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 31 08:30:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 31 08:30:03 compute-0 ceph-mon[75227]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 KiB/s wr, 26 op/s
Jan 31 08:30:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Jan 31 08:30:06 compute-0 ceph-mon[75227]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Jan 31 08:30:06 compute-0 sudo[243428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:06 compute-0 sudo[243428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:06 compute-0 sudo[243428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:06 compute-0 sudo[243453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:30:06 compute-0 sudo[243453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 4 op/s
Jan 31 08:30:07 compute-0 sudo[243453]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:07 compute-0 nova_compute[238824]: 2026-01-31 08:30:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:30:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:30:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:30:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:30:07 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:07 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:30:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:30:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:30:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:30:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:30:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:30:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:30:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:07 compute-0 sudo[243508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:07 compute-0 sudo[243508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:07 compute-0 sudo[243508]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:07 compute-0 sudo[243533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:30:07 compute-0 sudo[243533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:08 compute-0 podman[243571]: 2026-01-31 08:30:08.090712083 +0000 UTC m=+0.019123194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:08 compute-0 podman[243571]: 2026-01-31 08:30:08.306355677 +0000 UTC m=+0.234766808 container create f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:30:08 compute-0 systemd[1]: Started libpod-conmon-f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919.scope.
Jan 31 08:30:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:08 compute-0 podman[243571]: 2026-01-31 08:30:08.738400719 +0000 UTC m=+0.666811910 container init f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:08 compute-0 ceph-mon[75227]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 4 op/s
Jan 31 08:30:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:30:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:30:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:30:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:30:08 compute-0 podman[243571]: 2026-01-31 08:30:08.747833387 +0000 UTC m=+0.676244478 container start f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:30:08 compute-0 unruffled_mahavira[243588]: 167 167
Jan 31 08:30:08 compute-0 systemd[1]: libpod-f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919.scope: Deactivated successfully.
Jan 31 08:30:08 compute-0 podman[243571]: 2026-01-31 08:30:08.885245783 +0000 UTC m=+0.813656904 container attach f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:30:08 compute-0 podman[243571]: 2026-01-31 08:30:08.886591121 +0000 UTC m=+0.815002232 container died f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 4 op/s
Jan 31 08:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c629b02b02c06c374517bb4bc934fc410c928146fc2c2cc6cd82cf915008fb97-merged.mount: Deactivated successfully.
Jan 31 08:30:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:09 compute-0 ceph-mon[75227]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 614 B/s wr, 4 op/s
Jan 31 08:30:10 compute-0 podman[243571]: 2026-01-31 08:30:10.034050662 +0000 UTC m=+1.962461773 container remove f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_mahavira, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:30:10 compute-0 systemd[1]: libpod-conmon-f60874861738148039823e8aa87ea7de6e0004d35685ef57bf2f0125388f1919.scope: Deactivated successfully.
Jan 31 08:30:10 compute-0 podman[243610]: 2026-01-31 08:30:10.173418154 +0000 UTC m=+0.028238291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:10 compute-0 podman[243610]: 2026-01-31 08:30:10.314991929 +0000 UTC m=+0.169812026 container create 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:10 compute-0 nova_compute[238824]: 2026-01-31 08:30:10.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:10 compute-0 systemd[1]: Started libpod-conmon-944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1.scope.
Jan 31 08:30:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:10 compute-0 podman[243610]: 2026-01-31 08:30:10.844829125 +0000 UTC m=+0.699649202 container init 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:10 compute-0 podman[243610]: 2026-01-31 08:30:10.85169071 +0000 UTC m=+0.706510767 container start 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:30:10 compute-0 podman[243610]: 2026-01-31 08:30:10.946048116 +0000 UTC m=+0.800868173 container attach 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:30:11 compute-0 vigilant_elgamal[243627]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:30:11 compute-0 vigilant_elgamal[243627]: --> All data devices are unavailable
Jan 31 08:30:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 90 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:30:11 compute-0 systemd[1]: libpod-944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1.scope: Deactivated successfully.
Jan 31 08:30:11 compute-0 podman[243610]: 2026-01-31 08:30:11.334405099 +0000 UTC m=+1.189225176 container died 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.361 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.362 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.363 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-27cfbfc4039b8a530259aaa1ebcb2037984a787167404dd5b45960371939333d-merged.mount: Deactivated successfully.
Jan 31 08:30:11 compute-0 podman[243610]: 2026-01-31 08:30:11.412927395 +0000 UTC m=+1.267747452 container remove 944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:30:11 compute-0 systemd[1]: libpod-conmon-944d4b6e513522436e4663de5cad4803811158ac798fd0710a9a2c090dd1c0b1.scope: Deactivated successfully.
Jan 31 08:30:11 compute-0 sudo[243533]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:11 compute-0 sudo[243679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:11 compute-0 sudo[243679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:11 compute-0 sudo[243679]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:11 compute-0 sudo[243704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:30:11 compute-0 sudo[243704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:11 compute-0 podman[243741]: 2026-01-31 08:30:11.871075867 +0000 UTC m=+0.063160001 container create 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:30:11 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779464899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:11 compute-0 systemd[1]: Started libpod-conmon-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope.
Jan 31 08:30:11 compute-0 podman[243741]: 2026-01-31 08:30:11.82606703 +0000 UTC m=+0.018151194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:11 compute-0 nova_compute[238824]: 2026-01-31 08:30:11.950 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:11 compute-0 podman[243741]: 2026-01-31 08:30:11.983079233 +0000 UTC m=+0.175163387 container init 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:30:11 compute-0 podman[243741]: 2026-01-31 08:30:11.990938576 +0000 UTC m=+0.183022710 container start 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:11 compute-0 great_hoover[243759]: 167 167
Jan 31 08:30:11 compute-0 systemd[1]: libpod-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope: Deactivated successfully.
Jan 31 08:30:11 compute-0 conmon[243759]: conmon 98aa6a036b5e5d4d253f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope/container/memory.events
Jan 31 08:30:12 compute-0 podman[243741]: 2026-01-31 08:30:12.01401174 +0000 UTC m=+0.206095894 container attach 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:30:12 compute-0 podman[243741]: 2026-01-31 08:30:12.014509164 +0000 UTC m=+0.206593308 container died 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-386e94464947996c547e95d37a58faa5f2559efe3d2f86633e7d5e58f2e687d2-merged.mount: Deactivated successfully.
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.103 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.105 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.105 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.105 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:12 compute-0 podman[243741]: 2026-01-31 08:30:12.125854602 +0000 UTC m=+0.317938736 container remove 98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:30:12 compute-0 systemd[1]: libpod-conmon-98aa6a036b5e5d4d253f86f04e7aed8dc9a1f1fbed75d8c80074dcba934dad5a.scope: Deactivated successfully.
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.174 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.175 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.242 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.290080449 +0000 UTC m=+0.082985674 container create 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.227591517 +0000 UTC m=+0.020496742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:12 compute-0 systemd[1]: Started libpod-conmon-713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141.scope.
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.331 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.331 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:30:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:12 compute-0 ceph-mon[75227]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 90 B/s rd, 0 B/s wr, 0 op/s
Jan 31 08:30:12 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3779464899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.422368971 +0000 UTC m=+0.215274206 container init 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.428680329 +0000 UTC m=+0.221585544 container start 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.439409494 +0000 UTC m=+0.232314699 container attach 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.505 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.525 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:30:12 compute-0 nova_compute[238824]: 2026-01-31 08:30:12.539 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:30:12 compute-0 loving_darwin[243801]: {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:     "0": [
Jan 31 08:30:12 compute-0 loving_darwin[243801]:         {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "devices": [
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "/dev/loop3"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             ],
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_name": "ceph_lv0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_size": "21470642176",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "name": "ceph_lv0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "tags": {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.crush_device_class": "",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.encrypted": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.objectstore": "bluestore",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osd_id": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.type": "block",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.vdo": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.with_tpm": "0"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             },
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "type": "block",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "vg_name": "ceph_vg0"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:         }
Jan 31 08:30:12 compute-0 loving_darwin[243801]:     ],
Jan 31 08:30:12 compute-0 loving_darwin[243801]:     "1": [
Jan 31 08:30:12 compute-0 loving_darwin[243801]:         {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "devices": [
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "/dev/loop4"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             ],
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_name": "ceph_lv1",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_size": "21470642176",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "name": "ceph_lv1",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "tags": {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.crush_device_class": "",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.encrypted": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.objectstore": "bluestore",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osd_id": "1",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.type": "block",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.vdo": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.with_tpm": "0"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             },
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "type": "block",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "vg_name": "ceph_vg1"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:         }
Jan 31 08:30:12 compute-0 loving_darwin[243801]:     ],
Jan 31 08:30:12 compute-0 loving_darwin[243801]:     "2": [
Jan 31 08:30:12 compute-0 loving_darwin[243801]:         {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "devices": [
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "/dev/loop5"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             ],
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_name": "ceph_lv2",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_size": "21470642176",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "name": "ceph_lv2",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "tags": {
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.cluster_name": "ceph",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.crush_device_class": "",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.encrypted": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.objectstore": "bluestore",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osd_id": "2",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.type": "block",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.vdo": "0",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:                 "ceph.with_tpm": "0"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             },
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "type": "block",
Jan 31 08:30:12 compute-0 loving_darwin[243801]:             "vg_name": "ceph_vg2"
Jan 31 08:30:12 compute-0 loving_darwin[243801]:         }
Jan 31 08:30:12 compute-0 loving_darwin[243801]:     ]
Jan 31 08:30:12 compute-0 loving_darwin[243801]: }
Jan 31 08:30:12 compute-0 systemd[1]: libpod-713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141.scope: Deactivated successfully.
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.744942098 +0000 UTC m=+0.537847293 container died 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:30:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae3459ef63b366c1f8f5f78204e48762bbd29cd391fb5d137b90c41a2c4fe832-merged.mount: Deactivated successfully.
Jan 31 08:30:12 compute-0 podman[243785]: 2026-01-31 08:30:12.840623552 +0000 UTC m=+0.633528747 container remove 713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:30:12 compute-0 systemd[1]: libpod-conmon-713f944638812fb04043a0575d302c0b358a87ea2ead2d4745ba649e165cd141.scope: Deactivated successfully.
Jan 31 08:30:12 compute-0 sudo[243704]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:12 compute-0 sudo[243843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:30:12 compute-0 sudo[243843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:12 compute-0 sudo[243843]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:13 compute-0 sudo[243868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:30:13 compute-0 sudo[243868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:13 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:30:13 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817873369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:13 compute-0 nova_compute[238824]: 2026-01-31 08:30:13.072 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:30:13 compute-0 nova_compute[238824]: 2026-01-31 08:30:13.086 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:30:13 compute-0 nova_compute[238824]: 2026-01-31 08:30:13.101 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:30:13 compute-0 nova_compute[238824]: 2026-01-31 08:30:13.103 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:30:13 compute-0 nova_compute[238824]: 2026-01-31 08:30:13.103 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.294197834 +0000 UTC m=+0.051278115 container create 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:30:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:13 compute-0 systemd[1]: Started libpod-conmon-4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392.scope.
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.261891198 +0000 UTC m=+0.018971479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.486314422 +0000 UTC m=+0.243394703 container init 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.492043175 +0000 UTC m=+0.249123466 container start 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:30:13 compute-0 zealous_buck[243923]: 167 167
Jan 31 08:30:13 compute-0 systemd[1]: libpod-4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392.scope: Deactivated successfully.
Jan 31 08:30:13 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2817873369' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.64034176 +0000 UTC m=+0.397422021 container attach 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.641161913 +0000 UTC m=+0.398242184 container died 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6ca8c53e00456ab5fce6fbbb0cdacd9b7ebd003df2c58bc494f14ec7d9a9fe-merged.mount: Deactivated successfully.
Jan 31 08:30:13 compute-0 podman[243907]: 2026-01-31 08:30:13.838444798 +0000 UTC m=+0.595525049 container remove 4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:30:13 compute-0 systemd[1]: libpod-conmon-4132bf51c087acc3d8d165e2c009e7a5118f710b69b9b0a74984ad6a47481392.scope: Deactivated successfully.
Jan 31 08:30:14 compute-0 podman[243949]: 2026-01-31 08:30:14.001994046 +0000 UTC m=+0.071134248 container create a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:30:14 compute-0 podman[243949]: 2026-01-31 08:30:13.953558243 +0000 UTC m=+0.022698515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:30:14 compute-0 systemd[1]: Started libpod-conmon-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope.
Jan 31 08:30:14 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:30:14 compute-0 podman[243949]: 2026-01-31 08:30:14.147026649 +0000 UTC m=+0.216166891 container init a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:30:14 compute-0 podman[243949]: 2026-01-31 08:30:14.155412247 +0000 UTC m=+0.224552429 container start a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:30:14 compute-0 podman[243949]: 2026-01-31 08:30:14.178393189 +0000 UTC m=+0.247533381 container attach a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:30:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:14 compute-0 ceph-mon[75227]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:14 compute-0 lvm[244042]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:30:14 compute-0 lvm[244042]: VG ceph_vg0 finished
Jan 31 08:30:14 compute-0 lvm[244044]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:30:14 compute-0 lvm[244044]: VG ceph_vg1 finished
Jan 31 08:30:14 compute-0 lvm[244046]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:30:14 compute-0 lvm[244046]: VG ceph_vg2 finished
Jan 31 08:30:14 compute-0 practical_swirles[243965]: {}
Jan 31 08:30:14 compute-0 podman[243949]: 2026-01-31 08:30:14.942096616 +0000 UTC m=+1.011236798 container died a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:30:14 compute-0 systemd[1]: libpod-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope: Deactivated successfully.
Jan 31 08:30:14 compute-0 systemd[1]: libpod-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope: Consumed 1.188s CPU time.
Jan 31 08:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-12c01960ba3a69de51b01b04e954cab5f899c726c2b4281691cda187a87c3769-merged.mount: Deactivated successfully.
Jan 31 08:30:15 compute-0 podman[243949]: 2026-01-31 08:30:15.047451264 +0000 UTC m=+1.116591446 container remove a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:30:15 compute-0 systemd[1]: libpod-conmon-a0639e847a6dc8b9551b7b7ed03ef2ed8dedea86e6096bb9b2cba59f61dbe9c5.scope: Deactivated successfully.
Jan 31 08:30:15 compute-0 sudo[243868]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.099 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.101 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:30:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:30:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:30:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.131 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.131 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.131 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.145 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.146 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.146 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.146 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:30:15 compute-0 sudo[244064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:30:15 compute-0 sudo[244064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:30:15 compute-0 sudo[244064]: pam_unix(sudo:session): session closed for user root
Jan 31 08:30:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:15 compute-0 nova_compute[238824]: 2026-01-31 08:30:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:30:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:30:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:30:16 compute-0 ceph-mon[75227]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:30:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:30:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:30:17.889 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:30:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:30:17.889 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:30:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:30:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441845726' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:30:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:30:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1441845726' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:30:18 compute-0 ceph-mon[75227]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1441845726' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:30:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1441845726' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:30:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:19 compute-0 ceph-mon[75227]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:22 compute-0 ceph-mon[75227]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:23 compute-0 ceph-mon[75227]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:26 compute-0 ceph-mon[75227]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:27 compute-0 ceph-mon[75227]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:30 compute-0 ceph-mon[75227]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:31 compute-0 podman[244090]: 2026-01-31 08:30:31.178949832 +0000 UTC m=+0.062512524 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 08:30:31 compute-0 podman[244089]: 2026-01-31 08:30:31.227971472 +0000 UTC m=+0.111811282 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 08:30:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:30:31
Jan 31 08:30:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:30:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:30:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Jan 31 08:30:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:30:32 compute-0 ceph-mon[75227]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:30:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:30:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:34 compute-0 ceph-mon[75227]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:35 compute-0 ceph-mon[75227]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:38 compute-0 ceph-mon[75227]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:40 compute-0 ceph-mon[75227]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:41 compute-0 ceph-mon[75227]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:30:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:44 compute-0 ceph-mon[75227]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:45 compute-0 ceph-mon[75227]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:47 compute-0 ceph-mon[75227]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:49 compute-0 ceph-mon[75227]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:52 compute-0 ceph-mon[75227]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:54 compute-0 ceph-mon[75227]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:30:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:56 compute-0 ceph-mon[75227]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:58 compute-0 ceph-mon[75227]: pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:59 compute-0 ceph-mon[75227]: pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:30:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:01 compute-0 ceph-mon[75227]: pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:02 compute-0 podman[244136]: 2026-01-31 08:31:02.176869446 +0000 UTC m=+0.050160374 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 08:31:02 compute-0 podman[244135]: 2026-01-31 08:31:02.225073983 +0000 UTC m=+0.095912031 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:03 compute-0 ceph-mon[75227]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:06 compute-0 ceph-mon[75227]: pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:08 compute-0 ceph-mon[75227]: pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:09 compute-0 nova_compute[238824]: 2026-01-31 08:31:09.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:10 compute-0 ceph-mon[75227]: pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.374 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.374 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.374 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:31:11 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/975075714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:11 compute-0 nova_compute[238824]: 2026-01-31 08:31:11.870 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.018 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.019 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5137MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.019 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.020 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.086 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.086 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.101 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:31:12 compute-0 ceph-mon[75227]: pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:12 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/975075714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:31:12 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572898568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.659 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.664 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.680 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.684 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:31:12 compute-0 nova_compute[238824]: 2026-01-31 08:31:12.684 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:13 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3572898568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:31:13 compute-0 nova_compute[238824]: 2026-01-31 08:31:13.686 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:13 compute-0 nova_compute[238824]: 2026-01-31 08:31:13.686 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:13 compute-0 nova_compute[238824]: 2026-01-31 08:31:13.687 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:13 compute-0 nova_compute[238824]: 2026-01-31 08:31:13.687 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:31:14 compute-0 nova_compute[238824]: 2026-01-31 08:31:14.335 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:14 compute-0 ceph-mon[75227]: pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:15 compute-0 sudo[244225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:15 compute-0 sudo[244225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:15 compute-0 sudo[244225]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:15 compute-0 sudo[244250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:31:15 compute-0 sudo[244250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:15 compute-0 nova_compute[238824]: 2026-01-31 08:31:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:15 compute-0 nova_compute[238824]: 2026-01-31 08:31:15.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:31:15 compute-0 nova_compute[238824]: 2026-01-31 08:31:15.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:31:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:15 compute-0 nova_compute[238824]: 2026-01-31 08:31:15.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:31:15 compute-0 nova_compute[238824]: 2026-01-31 08:31:15.352 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:15 compute-0 ceph-mon[75227]: pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:15 compute-0 sudo[244250]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:31:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:31:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:31:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:31:16 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:31:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:31:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:31:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:31:16 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:31:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:31:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:16 compute-0 sudo[244305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:16 compute-0 sudo[244305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:16 compute-0 sudo[244305]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:16 compute-0 sudo[244330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:31:16 compute-0 sudo[244330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.46035623 +0000 UTC m=+0.089522740 container create eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.402676514 +0000 UTC m=+0.031843014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:16 compute-0 systemd[1]: Started libpod-conmon-eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b.scope.
Jan 31 08:31:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.561904459 +0000 UTC m=+0.191070929 container init eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.568135396 +0000 UTC m=+0.197301916 container start eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:31:16 compute-0 adoring_lumiere[244383]: 167 167
Jan 31 08:31:16 compute-0 systemd[1]: libpod-eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b.scope: Deactivated successfully.
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.587159286 +0000 UTC m=+0.216325886 container attach eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.587710581 +0000 UTC m=+0.216877071 container died eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:31:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:31:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:31:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:31:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:31:16 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc615f14ce799d1a40cce9b5be9ced2762ca90445e89473f22e7b3930c30a775-merged.mount: Deactivated successfully.
Jan 31 08:31:16 compute-0 podman[244367]: 2026-01-31 08:31:16.713451247 +0000 UTC m=+0.342617767 container remove eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:31:16 compute-0 systemd[1]: libpod-conmon-eddb36f6a78cfb430edc687fa8a3ee6f9d0a125823cfad5f86c882932049666b.scope: Deactivated successfully.
Jan 31 08:31:16 compute-0 podman[244409]: 2026-01-31 08:31:16.889010855 +0000 UTC m=+0.068199004 container create a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:31:16 compute-0 podman[244409]: 2026-01-31 08:31:16.8450734 +0000 UTC m=+0.024261539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:16 compute-0 systemd[1]: Started libpod-conmon-a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f.scope.
Jan 31 08:31:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:17 compute-0 podman[244409]: 2026-01-31 08:31:17.01047751 +0000 UTC m=+0.189665699 container init a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:31:17 compute-0 podman[244409]: 2026-01-31 08:31:17.019918478 +0000 UTC m=+0.199106647 container start a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:31:17 compute-0 podman[244409]: 2026-01-31 08:31:17.100519104 +0000 UTC m=+0.279707243 container attach a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:31:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:17 compute-0 nova_compute[238824]: 2026-01-31 08:31:17.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:31:17 compute-0 gifted_bouman[244426]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:31:17 compute-0 gifted_bouman[244426]: --> All data devices are unavailable
Jan 31 08:31:17 compute-0 systemd[1]: libpod-a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f.scope: Deactivated successfully.
Jan 31 08:31:17 compute-0 podman[244446]: 2026-01-31 08:31:17.476230438 +0000 UTC m=+0.022882300 container died a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:31:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2086e64dfe88818bc7ae13af01c0a92530a8e3b0aaebf9721589616e353bbee2-merged.mount: Deactivated successfully.
Jan 31 08:31:17 compute-0 podman[244446]: 2026-01-31 08:31:17.515761059 +0000 UTC m=+0.062412921 container remove a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:31:17 compute-0 systemd[1]: libpod-conmon-a26bbb0cbbdfbbf442f3072bc5fc34beb8872ca7f079e8e1c167bc73bb04a82f.scope: Deactivated successfully.
Jan 31 08:31:17 compute-0 sudo[244330]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:17 compute-0 sudo[244461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:17 compute-0 sudo[244461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:17 compute-0 sudo[244461]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:17 compute-0 ceph-mon[75227]: pgmap v869: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:17 compute-0 sudo[244486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:31:17 compute-0 sudo[244486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:31:17.888 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:31:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:31:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:31:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:31:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:31:17 compute-0 podman[244523]: 2026-01-31 08:31:17.941280416 +0000 UTC m=+0.047573380 container create 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:31:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:31:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116771768' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:31:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:31:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/116771768' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:31:17 compute-0 systemd[1]: Started libpod-conmon-707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c.scope.
Jan 31 08:31:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:18 compute-0 podman[244523]: 2026-01-31 08:31:17.920015813 +0000 UTC m=+0.026308837 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:18 compute-0 podman[244523]: 2026-01-31 08:31:18.022360395 +0000 UTC m=+0.128653379 container init 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:31:18 compute-0 podman[244523]: 2026-01-31 08:31:18.030052924 +0000 UTC m=+0.136345858 container start 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:31:18 compute-0 silly_einstein[244540]: 167 167
Jan 31 08:31:18 compute-0 systemd[1]: libpod-707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c.scope: Deactivated successfully.
Jan 31 08:31:18 compute-0 podman[244523]: 2026-01-31 08:31:18.036036393 +0000 UTC m=+0.142329337 container attach 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:31:18 compute-0 podman[244523]: 2026-01-31 08:31:18.036534307 +0000 UTC m=+0.142827261 container died 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-47855452fe1dcbe7165ace8db6e3fcd6c0c2cb83d087e5e962c252d4c2748333-merged.mount: Deactivated successfully.
Jan 31 08:31:18 compute-0 podman[244523]: 2026-01-31 08:31:18.109564408 +0000 UTC m=+0.215857342 container remove 707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_einstein, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:31:18 compute-0 systemd[1]: libpod-conmon-707fb64113f76421203d3853c584dd482232dd52ddf19a45eef1eb9a44a2319c.scope: Deactivated successfully.
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.290502839 +0000 UTC m=+0.062107342 container create c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:31:18 compute-0 systemd[1]: Started libpod-conmon-c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32.scope.
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.257975447 +0000 UTC m=+0.029580030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:18 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.377299301 +0000 UTC m=+0.148903804 container init c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.383829826 +0000 UTC m=+0.155434319 container start c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.3948877 +0000 UTC m=+0.166492203 container attach c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:31:18 compute-0 focused_austin[244580]: {
Jan 31 08:31:18 compute-0 focused_austin[244580]:     "0": [
Jan 31 08:31:18 compute-0 focused_austin[244580]:         {
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "devices": [
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "/dev/loop3"
Jan 31 08:31:18 compute-0 focused_austin[244580]:             ],
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_name": "ceph_lv0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_size": "21470642176",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "name": "ceph_lv0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "tags": {
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cluster_name": "ceph",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.crush_device_class": "",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.encrypted": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.objectstore": "bluestore",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osd_id": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.type": "block",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.vdo": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.with_tpm": "0"
Jan 31 08:31:18 compute-0 focused_austin[244580]:             },
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "type": "block",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "vg_name": "ceph_vg0"
Jan 31 08:31:18 compute-0 focused_austin[244580]:         }
Jan 31 08:31:18 compute-0 focused_austin[244580]:     ],
Jan 31 08:31:18 compute-0 focused_austin[244580]:     "1": [
Jan 31 08:31:18 compute-0 focused_austin[244580]:         {
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "devices": [
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "/dev/loop4"
Jan 31 08:31:18 compute-0 focused_austin[244580]:             ],
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_name": "ceph_lv1",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_size": "21470642176",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "name": "ceph_lv1",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "tags": {
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cluster_name": "ceph",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.crush_device_class": "",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.encrypted": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.objectstore": "bluestore",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osd_id": "1",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.type": "block",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.vdo": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.with_tpm": "0"
Jan 31 08:31:18 compute-0 focused_austin[244580]:             },
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "type": "block",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "vg_name": "ceph_vg1"
Jan 31 08:31:18 compute-0 focused_austin[244580]:         }
Jan 31 08:31:18 compute-0 focused_austin[244580]:     ],
Jan 31 08:31:18 compute-0 focused_austin[244580]:     "2": [
Jan 31 08:31:18 compute-0 focused_austin[244580]:         {
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "devices": [
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "/dev/loop5"
Jan 31 08:31:18 compute-0 focused_austin[244580]:             ],
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_name": "ceph_lv2",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_size": "21470642176",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "name": "ceph_lv2",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "tags": {
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.cluster_name": "ceph",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.crush_device_class": "",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.encrypted": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.objectstore": "bluestore",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osd_id": "2",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.type": "block",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.vdo": "0",
Jan 31 08:31:18 compute-0 focused_austin[244580]:                 "ceph.with_tpm": "0"
Jan 31 08:31:18 compute-0 focused_austin[244580]:             },
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "type": "block",
Jan 31 08:31:18 compute-0 focused_austin[244580]:             "vg_name": "ceph_vg2"
Jan 31 08:31:18 compute-0 focused_austin[244580]:         }
Jan 31 08:31:18 compute-0 focused_austin[244580]:     ]
Jan 31 08:31:18 compute-0 focused_austin[244580]: }
Jan 31 08:31:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/116771768' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:31:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/116771768' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:31:18 compute-0 systemd[1]: libpod-c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32.scope: Deactivated successfully.
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.680678224 +0000 UTC m=+0.452282717 container died c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0071043709eb05122e20569e6289f79d8bc752247749accc846d1bea75ee4e75-merged.mount: Deactivated successfully.
Jan 31 08:31:18 compute-0 podman[244563]: 2026-01-31 08:31:18.739551964 +0000 UTC m=+0.511156467 container remove c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_austin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:31:18 compute-0 systemd[1]: libpod-conmon-c607c6bb5e5b90fb7a2f250b8850b09dbc5a05f5eea06fa513f97f57a4c0ba32.scope: Deactivated successfully.
Jan 31 08:31:18 compute-0 sudo[244486]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:18 compute-0 sudo[244602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:31:18 compute-0 sudo[244602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:18 compute-0 sudo[244602]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:18 compute-0 sudo[244627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:31:18 compute-0 sudo[244627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.184854592 +0000 UTC m=+0.042687402 container create 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:19 compute-0 systemd[1]: Started libpod-conmon-3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48.scope.
Jan 31 08:31:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.262764441 +0000 UTC m=+0.120597271 container init 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.168534779 +0000 UTC m=+0.026367619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.269953925 +0000 UTC m=+0.127786735 container start 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.274056571 +0000 UTC m=+0.131889412 container attach 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:31:19 compute-0 brave_gagarin[244679]: 167 167
Jan 31 08:31:19 compute-0 systemd[1]: libpod-3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48.scope: Deactivated successfully.
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.277466438 +0000 UTC m=+0.135299278 container died 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc15e7f2efa22ca95ef0d839e810d27dea9ab07ffd5432073788cd3c0de90c4-merged.mount: Deactivated successfully.
Jan 31 08:31:19 compute-0 podman[244663]: 2026-01-31 08:31:19.320764856 +0000 UTC m=+0.178597666 container remove 3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_gagarin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:31:19 compute-0 systemd[1]: libpod-conmon-3e1b4e51cc30db0de8c7f675cc18c54317fee38c2d87926b743aea7172578f48.scope: Deactivated successfully.
Jan 31 08:31:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:19 compute-0 podman[244703]: 2026-01-31 08:31:19.45171026 +0000 UTC m=+0.042463276 container create 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:31:19 compute-0 systemd[1]: Started libpod-conmon-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope.
Jan 31 08:31:19 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:31:19 compute-0 podman[244703]: 2026-01-31 08:31:19.434150562 +0000 UTC m=+0.024903598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:31:19 compute-0 podman[244703]: 2026-01-31 08:31:19.536536865 +0000 UTC m=+0.127289891 container init 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:31:19 compute-0 podman[244703]: 2026-01-31 08:31:19.548945657 +0000 UTC m=+0.139698703 container start 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:31:19 compute-0 podman[244703]: 2026-01-31 08:31:19.556865602 +0000 UTC m=+0.147618638 container attach 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:31:19 compute-0 ceph-mon[75227]: pgmap v870: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:20 compute-0 lvm[244799]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:31:20 compute-0 lvm[244800]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:31:20 compute-0 lvm[244800]: VG ceph_vg1 finished
Jan 31 08:31:20 compute-0 lvm[244799]: VG ceph_vg0 finished
Jan 31 08:31:20 compute-0 lvm[244802]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:31:20 compute-0 lvm[244802]: VG ceph_vg2 finished
Jan 31 08:31:20 compute-0 magical_shannon[244720]: {}
Jan 31 08:31:20 compute-0 systemd[1]: libpod-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope: Deactivated successfully.
Jan 31 08:31:20 compute-0 podman[244703]: 2026-01-31 08:31:20.394816044 +0000 UTC m=+0.985569160 container died 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:31:20 compute-0 systemd[1]: libpod-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope: Consumed 1.237s CPU time.
Jan 31 08:31:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bd923c25c65e35df97155d5aba2b1cb1694e43add54e133beca03ece577008c-merged.mount: Deactivated successfully.
Jan 31 08:31:20 compute-0 podman[244703]: 2026-01-31 08:31:20.438966496 +0000 UTC m=+1.029719552 container remove 43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:31:20 compute-0 systemd[1]: libpod-conmon-43e7999e4593823b5ae5c2ad0d779d4e09b7d61587dd7e48745dc82bdc0da513.scope: Deactivated successfully.
Jan 31 08:31:20 compute-0 sudo[244627]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:31:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:31:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:31:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:31:20 compute-0 sudo[244817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:31:20 compute-0 sudo[244817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:31:20 compute-0 sudo[244817]: pam_unix(sudo:session): session closed for user root
Jan 31 08:31:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:31:21 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:31:22 compute-0 ceph-mon[75227]: pgmap v871: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:23 compute-0 ceph-mon[75227]: pgmap v872: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:26 compute-0 ceph-mon[75227]: pgmap v873: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:27 compute-0 ceph-mon[75227]: pgmap v874: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:30 compute-0 ceph-mon[75227]: pgmap v875: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:31:31
Jan 31 08:31:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:31:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:31:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms']
Jan 31 08:31:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:31:32 compute-0 ceph-mon[75227]: pgmap v876: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:31:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:31:33 compute-0 podman[244843]: 2026-01-31 08:31:33.169092653 +0000 UTC m=+0.059271399 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:31:33 compute-0 podman[244842]: 2026-01-31 08:31:33.189382708 +0000 UTC m=+0.080929323 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:31:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:33 compute-0 ceph-osd[85971]: bluestore.MempoolThread fragmentation_score=0.000115 took=0.000017s
Jan 31 08:31:33 compute-0 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000027s
Jan 31 08:31:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000028s
Jan 31 08:31:34 compute-0 ceph-mon[75227]: pgmap v877: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:36 compute-0 ceph-mon[75227]: pgmap v878: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:38 compute-0 ceph-mon[75227]: pgmap v879: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:39 compute-0 ceph-mon[75227]: pgmap v880: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:42 compute-0 ceph-mon[75227]: pgmap v881: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:31:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:44 compute-0 ceph-mon[75227]: pgmap v882: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:46 compute-0 ceph-mon[75227]: pgmap v883: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:47 compute-0 ceph-mon[75227]: pgmap v884: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:50 compute-0 ceph-mon[75227]: pgmap v885: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:51 compute-0 ceph-mon[75227]: pgmap v886: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:54 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:31:55 compute-0 ceph-mon[75227]: pgmap v887: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:56 compute-0 ceph-mon[75227]: pgmap v888: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:58 compute-0 ceph-mon[75227]: pgmap v889: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:31:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:00 compute-0 ceph-mon[75227]: pgmap v890: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:02 compute-0 ceph-mon[75227]: pgmap v891: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:03 compute-0 ceph-mon[75227]: pgmap v892: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:04 compute-0 podman[244886]: 2026-01-31 08:32:04.191442375 +0000 UTC m=+0.075233641 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:32:04 compute-0 podman[244885]: 2026-01-31 08:32:04.21313902 +0000 UTC m=+0.099004885 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 08:32:04 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:06 compute-0 ceph-mon[75227]: pgmap v893: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:08 compute-0 ceph-mon[75227]: pgmap v894: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:09 compute-0 nova_compute[238824]: 2026-01-31 08:32:09.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:09 compute-0 ceph-mon[75227]: pgmap v895: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:12 compute-0 ceph-mon[75227]: pgmap v896: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.397 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:32:13 compute-0 nova_compute[238824]: 2026-01-31 08:32:13.398 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:32:14 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2410728628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.042 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.226 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5139MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.228 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.228 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.307 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.308 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:32:14 compute-0 nova_compute[238824]: 2026-01-31 08:32:14.327 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:32:14 compute-0 ceph-mon[75227]: pgmap v897: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:14 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2410728628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:32:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452447454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:15 compute-0 nova_compute[238824]: 2026-01-31 08:32:15.072 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:32:15 compute-0 nova_compute[238824]: 2026-01-31 08:32:15.077 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:32:15 compute-0 nova_compute[238824]: 2026-01-31 08:32:15.099 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:32:15 compute-0 nova_compute[238824]: 2026-01-31 08:32:15.100 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:32:15 compute-0 nova_compute[238824]: 2026-01-31 08:32:15.101 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:15 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 31 08:32:15 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:15.234906) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:15 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 31 08:32:15 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848335234965, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2031, "num_deletes": 253, "total_data_size": 3392812, "memory_usage": 3448168, "flush_reason": "Manual Compaction"}
Jan 31 08:32:15 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 31 08:32:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336112029, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1983624, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16438, "largest_seqno": 18468, "table_properties": {"data_size": 1976835, "index_size": 3607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16924, "raw_average_key_size": 20, "raw_value_size": 1961819, "raw_average_value_size": 2372, "num_data_blocks": 165, "num_entries": 827, "num_filter_entries": 827, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848119, "oldest_key_time": 1769848119, "file_creation_time": 1769848335, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 877161 microseconds, and 3876 cpu microseconds.
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.112083) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1983624 bytes OK
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.112106) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.271797) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.271850) EVENT_LOG_v1 {"time_micros": 1769848336271840, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.271877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3384234, prev total WAL file size 3386485, number of live WAL files 2.
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.272762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1937KB)], [38(7936KB)]
Jan 31 08:32:16 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848336272813, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10110589, "oldest_snapshot_seqno": -1}
Jan 31 08:32:16 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1452447454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:32:16 compute-0 ceph-mon[75227]: pgmap v898: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.096 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.096 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4477 keys, 8097248 bytes, temperature: kUnknown
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848337117975, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8097248, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8066255, "index_size": 18723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 108263, "raw_average_key_size": 24, "raw_value_size": 7984230, "raw_average_value_size": 1783, "num_data_blocks": 794, "num_entries": 4477, "num_filter_entries": 4477, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.168 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.169 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.169 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.205 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.118232) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8097248 bytes
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.275680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 12.0 rd, 9.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.8 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4896, records dropped: 419 output_compression: NoCompression
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.275724) EVENT_LOG_v1 {"time_micros": 1769848337275706, "job": 18, "event": "compaction_finished", "compaction_time_micros": 845248, "compaction_time_cpu_micros": 24544, "output_level": 6, "num_output_files": 1, "total_output_size": 8097248, "num_input_records": 4896, "num_output_records": 4477, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848337276228, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848337277310, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:16.272660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:17.277369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:17 compute-0 nova_compute[238824]: 2026-01-31 08:32:17.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:32:17.889 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:32:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:32:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:32:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:32:17.890 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:32:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:32:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119589343' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:32:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:32:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119589343' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:32:19 compute-0 nova_compute[238824]: 2026-01-31 08:32:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:32:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:19 compute-0 ceph-mon[75227]: pgmap v899: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/119589343' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:32:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/119589343' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:32:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:20 compute-0 sudo[244973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:20 compute-0 sudo[244973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:20 compute-0 sudo[244973]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:20 compute-0 sudo[244998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:32:20 compute-0 sudo[244998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:21 compute-0 sudo[244998]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:32:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:32:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:32:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:32:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:32:21 compute-0 ceph-mon[75227]: pgmap v900: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:32:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:32:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:32:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:32:23 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:32:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:32:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:32:23 compute-0 sudo[245054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:23 compute-0 sudo[245054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:23 compute-0 sudo[245054]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:23 compute-0 sudo[245079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:32:23 compute-0 sudo[245079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:23 compute-0 podman[245116]: 2026-01-31 08:32:23.894021412 +0000 UTC m=+0.021178571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:24 compute-0 ceph-mon[75227]: pgmap v901: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:32:24 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:32:24 compute-0 podman[245116]: 2026-01-31 08:32:24.407023659 +0000 UTC m=+0.534180728 container create c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:24 compute-0 systemd[1]: Started libpod-conmon-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope.
Jan 31 08:32:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:24 compute-0 podman[245116]: 2026-01-31 08:32:24.852785763 +0000 UTC m=+0.979942912 container init c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:24 compute-0 podman[245116]: 2026-01-31 08:32:24.861081468 +0000 UTC m=+0.988238537 container start c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:32:24 compute-0 inspiring_mendeleev[245133]: 167 167
Jan 31 08:32:24 compute-0 systemd[1]: libpod-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope: Deactivated successfully.
Jan 31 08:32:24 compute-0 conmon[245133]: conmon c344c7950536fa2a1887 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope/container/memory.events
Jan 31 08:32:25 compute-0 podman[245116]: 2026-01-31 08:32:25.003908393 +0000 UTC m=+1.131065512 container attach c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:32:25 compute-0 podman[245116]: 2026-01-31 08:32:25.004688785 +0000 UTC m=+1.131845854 container died c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:32:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:32:25 compute-0 ceph-mon[75227]: pgmap v902: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:32:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:32:25 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:26.700947) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848346701005, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 334, "num_deletes": 251, "total_data_size": 163909, "memory_usage": 170104, "flush_reason": "Manual Compaction"}
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848346869165, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 162760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18469, "largest_seqno": 18802, "table_properties": {"data_size": 160632, "index_size": 292, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5534, "raw_average_key_size": 18, "raw_value_size": 156361, "raw_average_value_size": 530, "num_data_blocks": 13, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848336, "oldest_key_time": 1769848336, "file_creation_time": 1769848346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 168243 microseconds, and 1160 cpu microseconds.
Jan 31 08:32:26 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:26.869200) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 162760 bytes OK
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:26.869216) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086051) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086109) EVENT_LOG_v1 {"time_micros": 1769848347086099, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 161560, prev total WAL file size 162715, number of live WAL files 2.
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(158KB)], [41(7907KB)]
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347086696, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8260008, "oldest_snapshot_seqno": -1}
Jan 31 08:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-caf6d1304db9059e68b7f881624d33d8a0b19737a451f65e6b64cf12b0b4d054-merged.mount: Deactivated successfully.
Jan 31 08:32:27 compute-0 ceph-mon[75227]: pgmap v903: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4261 keys, 6487523 bytes, temperature: kUnknown
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347580222, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6487523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6459522, "index_size": 16244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 104502, "raw_average_key_size": 24, "raw_value_size": 6382840, "raw_average_value_size": 1497, "num_data_blocks": 680, "num_entries": 4261, "num_filter_entries": 4261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848347, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.580594) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6487523 bytes
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.906970) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.7 rd, 13.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.7 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(90.6) write-amplify(39.9) OK, records in: 4772, records dropped: 511 output_compression: NoCompression
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907012) EVENT_LOG_v1 {"time_micros": 1769848347906995, "job": 20, "event": "compaction_finished", "compaction_time_micros": 493667, "compaction_time_cpu_micros": 18988, "output_level": 6, "num_output_files": 1, "total_output_size": 6487523, "num_input_records": 4772, "num_output_records": 4261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347907202, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848347907963, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.086505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.907999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:27 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:32:27.908001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:32:28 compute-0 ceph-mon[75227]: pgmap v904: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:28 compute-0 podman[245116]: 2026-01-31 08:32:28.851197619 +0000 UTC m=+4.978354688 container remove c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_mendeleev, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:32:28 compute-0 systemd[1]: libpod-conmon-c344c7950536fa2a18878dbb95c835a7963bb93c274924c796c3300758fff775.scope: Deactivated successfully.
Jan 31 08:32:29 compute-0 podman[245158]: 2026-01-31 08:32:28.969760637 +0000 UTC m=+0.023937489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:29 compute-0 podman[245158]: 2026-01-31 08:32:29.285655343 +0000 UTC m=+0.339832105 container create 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:32:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:29 compute-0 systemd[1]: Started libpod-conmon-5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9.scope.
Jan 31 08:32:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:29 compute-0 podman[245158]: 2026-01-31 08:32:29.991668458 +0000 UTC m=+1.045845240 container init 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:32:30 compute-0 podman[245158]: 2026-01-31 08:32:30.00376593 +0000 UTC m=+1.057942692 container start 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:32:30 compute-0 ceph-mon[75227]: pgmap v905: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:30 compute-0 podman[245158]: 2026-01-31 08:32:30.224630225 +0000 UTC m=+1.278807017 container attach 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:30 compute-0 serene_kowalevski[245174]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:32:30 compute-0 serene_kowalevski[245174]: --> All data devices are unavailable
Jan 31 08:32:30 compute-0 systemd[1]: libpod-5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9.scope: Deactivated successfully.
Jan 31 08:32:30 compute-0 podman[245158]: 2026-01-31 08:32:30.466514055 +0000 UTC m=+1.520690817 container died 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 31 08:32:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c51982355b421b737282369a434c17e51dfe9dcfa57ff3d03593214c77e67f0c-merged.mount: Deactivated successfully.
Jan 31 08:32:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:32:31
Jan 31 08:32:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:32:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:32:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'default.rgw.log']
Jan 31 08:32:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:32:32 compute-0 ceph-mon[75227]: pgmap v906: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:32:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:32:33 compute-0 podman[245158]: 2026-01-31 08:32:33.127926287 +0000 UTC m=+4.182103059 container remove 5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:33 compute-0 systemd[1]: libpod-conmon-5b7569a98bbff761d16f770fc51c5c7a0912d4b7bdd05ee86930f96b128a3ac9.scope: Deactivated successfully.
Jan 31 08:32:33 compute-0 sudo[245079]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:33 compute-0 sudo[245207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:33 compute-0 sudo[245207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:33 compute-0 sudo[245207]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:33 compute-0 sudo[245232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:32:33 compute-0 sudo[245232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:33 compute-0 podman[245269]: 2026-01-31 08:32:33.548881929 +0000 UTC m=+0.026901663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:33 compute-0 podman[245269]: 2026-01-31 08:32:33.827090818 +0000 UTC m=+0.305110492 container create faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:32:34 compute-0 ceph-mon[75227]: pgmap v907: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:34 compute-0 systemd[1]: Started libpod-conmon-faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c.scope.
Jan 31 08:32:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:34 compute-0 podman[245269]: 2026-01-31 08:32:34.589564821 +0000 UTC m=+1.067584545 container init faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:32:34 compute-0 podman[245269]: 2026-01-31 08:32:34.596738294 +0000 UTC m=+1.074757968 container start faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:32:34 compute-0 nifty_payne[245285]: 167 167
Jan 31 08:32:34 compute-0 systemd[1]: libpod-faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c.scope: Deactivated successfully.
Jan 31 08:32:34 compute-0 podman[245269]: 2026-01-31 08:32:34.751569959 +0000 UTC m=+1.229589623 container attach faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:32:34 compute-0 podman[245269]: 2026-01-31 08:32:34.752532427 +0000 UTC m=+1.230552081 container died faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:32:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:35 compute-0 ceph-mon[75227]: pgmap v908: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f26e65f073405f01beb432397e55b051e66a8c79c0c77a76d824f2b28270318d-merged.mount: Deactivated successfully.
Jan 31 08:32:36 compute-0 podman[245269]: 2026-01-31 08:32:36.737736497 +0000 UTC m=+3.215756131 container remove faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:36 compute-0 systemd[1]: libpod-conmon-faf8de832dea9ea5a4acd31c0b4728acc017082be1bbeb728e4214acc5925b4c.scope: Deactivated successfully.
Jan 31 08:32:36 compute-0 podman[245286]: 2026-01-31 08:32:36.806455823 +0000 UTC m=+2.562363296 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:32:36 compute-0 podman[245288]: 2026-01-31 08:32:36.915392318 +0000 UTC m=+2.671100855 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:32:37 compute-0 podman[245350]: 2026-01-31 08:32:36.914859093 +0000 UTC m=+0.031138213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:37 compute-0 podman[245350]: 2026-01-31 08:32:37.116512754 +0000 UTC m=+0.232791834 container create 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:32:37 compute-0 systemd[1]: Started libpod-conmon-73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a.scope.
Jan 31 08:32:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:37 compute-0 podman[245350]: 2026-01-31 08:32:37.722724972 +0000 UTC m=+0.839004062 container init 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:32:37 compute-0 podman[245350]: 2026-01-31 08:32:37.729741581 +0000 UTC m=+0.846020691 container start 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]: {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:     "0": [
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:         {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "devices": [
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "/dev/loop3"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             ],
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_name": "ceph_lv0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_size": "21470642176",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "name": "ceph_lv0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "tags": {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.crush_device_class": "",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.encrypted": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.objectstore": "bluestore",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osd_id": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.type": "block",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.vdo": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.with_tpm": "0"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             },
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "type": "block",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "vg_name": "ceph_vg0"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:         }
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:     ],
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:     "1": [
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:         {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "devices": [
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "/dev/loop4"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             ],
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_name": "ceph_lv1",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_size": "21470642176",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "name": "ceph_lv1",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "tags": {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.crush_device_class": "",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.encrypted": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.objectstore": "bluestore",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osd_id": "1",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.type": "block",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.vdo": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.with_tpm": "0"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             },
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "type": "block",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "vg_name": "ceph_vg1"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:         }
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:     ],
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:     "2": [
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:         {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "devices": [
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "/dev/loop5"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             ],
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_name": "ceph_lv2",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_size": "21470642176",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "name": "ceph_lv2",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "tags": {
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.cluster_name": "ceph",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.crush_device_class": "",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.encrypted": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.objectstore": "bluestore",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osd_id": "2",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.type": "block",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.vdo": "0",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:                 "ceph.with_tpm": "0"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             },
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "type": "block",
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:             "vg_name": "ceph_vg2"
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:         }
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]:     ]
Jan 31 08:32:38 compute-0 sharp_lederberg[245372]: }
Jan 31 08:32:38 compute-0 systemd[1]: libpod-73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a.scope: Deactivated successfully.
Jan 31 08:32:38 compute-0 podman[245350]: 2026-01-31 08:32:38.08389784 +0000 UTC m=+1.200176960 container attach 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:32:38 compute-0 podman[245350]: 2026-01-31 08:32:38.084613791 +0000 UTC m=+1.200892891 container died 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:32:38 compute-0 ceph-mon[75227]: pgmap v909: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-494abb41d546ff466a28cdc90ec9032106cb60ba6abf16f25f9edea7670e77e9-merged.mount: Deactivated successfully.
Jan 31 08:32:38 compute-0 podman[245350]: 2026-01-31 08:32:38.932226975 +0000 UTC m=+2.048506045 container remove 73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:38 compute-0 sudo[245232]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:39 compute-0 systemd[1]: libpod-conmon-73269ca782ac75dc1d9cf8b5132ea56907ef8e01a6e296c3c0b487625b4b268a.scope: Deactivated successfully.
Jan 31 08:32:39 compute-0 sudo[245394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:32:39 compute-0 sudo[245394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:39 compute-0 sudo[245394]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:39 compute-0 sudo[245419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:32:39 compute-0 sudo[245419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:39 compute-0 podman[245455]: 2026-01-31 08:32:39.376375023 +0000 UTC m=+0.023058894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:39 compute-0 podman[245455]: 2026-01-31 08:32:39.660445208 +0000 UTC m=+0.307129059 container create 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:32:39 compute-0 ceph-mon[75227]: pgmap v910: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:39 compute-0 systemd[1]: Started libpod-conmon-5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f.scope.
Jan 31 08:32:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:39 compute-0 podman[245455]: 2026-01-31 08:32:39.917180499 +0000 UTC m=+0.563864360 container init 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:32:39 compute-0 podman[245455]: 2026-01-31 08:32:39.92429386 +0000 UTC m=+0.570977721 container start 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:32:39 compute-0 musing_goldwasser[245471]: 167 167
Jan 31 08:32:39 compute-0 systemd[1]: libpod-5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f.scope: Deactivated successfully.
Jan 31 08:32:40 compute-0 podman[245455]: 2026-01-31 08:32:40.044827233 +0000 UTC m=+0.691511504 container attach 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:32:40 compute-0 podman[245455]: 2026-01-31 08:32:40.045480842 +0000 UTC m=+0.692164733 container died 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:32:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-870f0b24ba848db99d8aa0a685d550cad6de8da3181389310208b8a3796cde58-merged.mount: Deactivated successfully.
Jan 31 08:32:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:41 compute-0 podman[245455]: 2026-01-31 08:32:41.4666462 +0000 UTC m=+2.113330091 container remove 5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:32:41 compute-0 systemd[1]: libpod-conmon-5f2483c5e1010ca65e87d615c6be94e05ba9371f182f9ca0b939ca6be252136f.scope: Deactivated successfully.
Jan 31 08:32:41 compute-0 podman[245495]: 2026-01-31 08:32:41.622567076 +0000 UTC m=+0.028224651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:32:41 compute-0 podman[245495]: 2026-01-31 08:32:41.759245676 +0000 UTC m=+0.164903201 container create 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:32:41 compute-0 ceph-mon[75227]: pgmap v911: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:41 compute-0 systemd[1]: Started libpod-conmon-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope.
Jan 31 08:32:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:32:41 compute-0 podman[245495]: 2026-01-31 08:32:41.970595192 +0000 UTC m=+0.376252757 container init 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:32:41 compute-0 podman[245495]: 2026-01-31 08:32:41.981171301 +0000 UTC m=+0.386828806 container start 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 31 08:32:42 compute-0 podman[245495]: 2026-01-31 08:32:42.054086036 +0000 UTC m=+0.459743541 container attach 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:32:42 compute-0 lvm[245590]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:32:42 compute-0 lvm[245591]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:32:42 compute-0 lvm[245591]: VG ceph_vg1 finished
Jan 31 08:32:42 compute-0 lvm[245590]: VG ceph_vg0 finished
Jan 31 08:32:42 compute-0 lvm[245593]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:32:42 compute-0 lvm[245593]: VG ceph_vg2 finished
Jan 31 08:32:42 compute-0 condescending_greider[245512]: {}
Jan 31 08:32:42 compute-0 systemd[1]: libpod-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope: Deactivated successfully.
Jan 31 08:32:42 compute-0 systemd[1]: libpod-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope: Consumed 1.099s CPU time.
Jan 31 08:32:42 compute-0 podman[245495]: 2026-01-31 08:32:42.815752336 +0000 UTC m=+1.221409861 container died 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:32:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4157d1799107329c83994056795f325c65456a10ade8aed4d2b2e8469bc11bca-merged.mount: Deactivated successfully.
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:32:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:43 compute-0 podman[245495]: 2026-01-31 08:32:43.472790412 +0000 UTC m=+1.878447937 container remove 611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:32:43 compute-0 systemd[1]: libpod-conmon-611d49b54b417a7e3a3e255b628787439f0b184e47e4ee11a991134a1ab75e1c.scope: Deactivated successfully.
Jan 31 08:32:43 compute-0 sudo[245419]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:32:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:32:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:32:43 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:32:43 compute-0 sudo[245609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:32:43 compute-0 sudo[245609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:32:43 compute-0 sudo[245609]: pam_unix(sudo:session): session closed for user root
Jan 31 08:32:44 compute-0 ceph-mon[75227]: pgmap v912: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:32:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:32:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:45 compute-0 ceph-mon[75227]: pgmap v913: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:48 compute-0 ceph-mon[75227]: pgmap v914: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:49 compute-0 ceph-mon[75227]: pgmap v915: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:51 compute-0 ceph-mon[75227]: pgmap v916: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:54 compute-0 ceph-mon[75227]: pgmap v917: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:32:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:55 compute-0 ceph-mon[75227]: pgmap v918: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:58 compute-0 ceph-mon[75227]: pgmap v919: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:32:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:00 compute-0 ceph-mon[75227]: pgmap v920: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:02 compute-0 ceph-mon[75227]: pgmap v921: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:03 compute-0 ceph-mon[75227]: pgmap v922: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:07 compute-0 ceph-mon[75227]: pgmap v923: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:07 compute-0 podman[245634]: 2026-01-31 08:33:07.173292627 +0000 UTC m=+0.069907321 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 08:33:07 compute-0 podman[245635]: 2026-01-31 08:33:07.18327858 +0000 UTC m=+0.074390958 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:33:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:08 compute-0 ceph-mon[75227]: pgmap v924: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:09 compute-0 nova_compute[238824]: 2026-01-31 08:33:09.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:10 compute-0 ceph-mon[75227]: pgmap v925: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:11 compute-0 ceph-mon[75227]: pgmap v926: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:14 compute-0 nova_compute[238824]: 2026-01-31 08:33:14.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:14 compute-0 nova_compute[238824]: 2026-01-31 08:33:14.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:33:14 compute-0 ceph-mon[75227]: pgmap v927: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.373 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.374 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:33:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3605119693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:15 compute-0 nova_compute[238824]: 2026-01-31 08:33:15.929 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.067 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.068 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5141MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.068 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.068 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.153 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.153 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.172 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:33:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:33:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1187874455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.652 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.658 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:33:16 compute-0 ceph-mon[75227]: pgmap v928: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:16 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3605119693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.683 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.686 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:33:16 compute-0 nova_compute[238824]: 2026-01-31 08:33:16.687 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:17 compute-0 nova_compute[238824]: 2026-01-31 08:33:17.682 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:17 compute-0 nova_compute[238824]: 2026-01-31 08:33:17.683 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:17 compute-0 nova_compute[238824]: 2026-01-31 08:33:17.683 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:33:17 compute-0 nova_compute[238824]: 2026-01-31 08:33:17.683 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:33:17 compute-0 nova_compute[238824]: 2026-01-31 08:33:17.710 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:33:17 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1187874455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:33:17 compute-0 ceph-mon[75227]: pgmap v929: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:33:17.891 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:33:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:33:17.891 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:33:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:33:17.892 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:33:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:33:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34115812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:33:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:33:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34115812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:33:18 compute-0 nova_compute[238824]: 2026-01-31 08:33:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/34115812' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:33:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/34115812' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:33:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:19 compute-0 ceph-mon[75227]: pgmap v930: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:21 compute-0 nova_compute[238824]: 2026-01-31 08:33:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:33:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:22 compute-0 ceph-mon[75227]: pgmap v931: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:24 compute-0 ceph-mon[75227]: pgmap v932: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:26 compute-0 ceph-mon[75227]: pgmap v933: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:28 compute-0 ceph-mon[75227]: pgmap v934: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:30 compute-0 ceph-mon[75227]: pgmap v935: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:31 compute-0 ceph-mon[75227]: pgmap v936: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:33:31
Jan 31 08:33:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:33:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:33:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'images', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Jan 31 08:33:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:33:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:33:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:33:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:34 compute-0 ceph-mon[75227]: pgmap v937: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:35 compute-0 ceph-mon[75227]: pgmap v938: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:38 compute-0 podman[245723]: 2026-01-31 08:33:38.175280295 +0000 UTC m=+0.059390910 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:33:38 compute-0 podman[245722]: 2026-01-31 08:33:38.20803651 +0000 UTC m=+0.092098394 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:33:38 compute-0 ceph-mon[75227]: pgmap v939: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:40 compute-0 ceph-mon[75227]: pgmap v940: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:42 compute-0 ceph-mon[75227]: pgmap v941: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:33:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:43 compute-0 ceph-mon[75227]: pgmap v942: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:43 compute-0 sudo[245761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:43 compute-0 sudo[245761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:43 compute-0 sudo[245761]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:43 compute-0 sudo[245786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:33:43 compute-0 sudo[245786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 sudo[245786]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:33:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:44 compute-0 sudo[245843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:44 compute-0 sudo[245843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 sudo[245843]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:44 compute-0 sudo[245868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:33:44 compute-0 sudo[245868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:33:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:33:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:33:44 compute-0 podman[245905]: 2026-01-31 08:33:44.958358095 +0000 UTC m=+0.059998697 container create 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 08:33:45 compute-0 podman[245905]: 2026-01-31 08:33:44.920183466 +0000 UTC m=+0.021824108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:45 compute-0 systemd[1]: Started libpod-conmon-947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54.scope.
Jan 31 08:33:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:45 compute-0 podman[245905]: 2026-01-31 08:33:45.134678279 +0000 UTC m=+0.236318921 container init 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:33:45 compute-0 podman[245905]: 2026-01-31 08:33:45.142616903 +0000 UTC m=+0.244257525 container start 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:33:45 compute-0 sharp_dewdney[245922]: 167 167
Jan 31 08:33:45 compute-0 systemd[1]: libpod-947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54.scope: Deactivated successfully.
Jan 31 08:33:45 compute-0 podman[245905]: 2026-01-31 08:33:45.170236044 +0000 UTC m=+0.271876656 container attach 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:33:45 compute-0 podman[245905]: 2026-01-31 08:33:45.171215091 +0000 UTC m=+0.272855703 container died 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ba6451af2729b3d56a0e865b6046d2d272914b58967e6be4c46f5f25e54c468-merged.mount: Deactivated successfully.
Jan 31 08:33:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:45 compute-0 podman[245905]: 2026-01-31 08:33:45.539920373 +0000 UTC m=+0.641560985 container remove 947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dewdney, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:45 compute-0 systemd[1]: libpod-conmon-947856d0df239aa3f06f94fdd644875f95c59fde4d88801ce6e569682fc1ad54.scope: Deactivated successfully.
Jan 31 08:33:45 compute-0 podman[245946]: 2026-01-31 08:33:45.740526523 +0000 UTC m=+0.111203764 container create 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:33:45 compute-0 podman[245946]: 2026-01-31 08:33:45.648198663 +0000 UTC m=+0.018875944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:45 compute-0 systemd[1]: Started libpod-conmon-87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b.scope.
Jan 31 08:33:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:45 compute-0 ceph-mon[75227]: pgmap v943: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:46 compute-0 podman[245946]: 2026-01-31 08:33:46.018409827 +0000 UTC m=+0.389087148 container init 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:46 compute-0 podman[245946]: 2026-01-31 08:33:46.026851266 +0000 UTC m=+0.397528557 container start 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:33:46 compute-0 podman[245946]: 2026-01-31 08:33:46.137912005 +0000 UTC m=+0.508589366 container attach 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:46 compute-0 nice_babbage[245963]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:33:46 compute-0 nice_babbage[245963]: --> All data devices are unavailable
Jan 31 08:33:46 compute-0 systemd[1]: libpod-87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b.scope: Deactivated successfully.
Jan 31 08:33:46 compute-0 podman[245946]: 2026-01-31 08:33:46.451885489 +0000 UTC m=+0.822562760 container died 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf3c9344b14bd9b7559c54f7186d4a9856a6330cd5fc7f79f2ab1d955d6e2a22-merged.mount: Deactivated successfully.
Jan 31 08:33:47 compute-0 podman[245946]: 2026-01-31 08:33:47.270339341 +0000 UTC m=+1.641016592 container remove 87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:47 compute-0 sudo[245868]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:47 compute-0 systemd[1]: libpod-conmon-87a2047efd40ba3a373e6a2b013be2c83cd9e71de2c85e30182dd79dad06596b.scope: Deactivated successfully.
Jan 31 08:33:47 compute-0 sudo[245995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:47 compute-0 sudo[245995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:47 compute-0 sudo[245995]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:47 compute-0 sudo[246020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:33:47 compute-0 sudo[246020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:47 compute-0 podman[246056]: 2026-01-31 08:33:47.669610336 +0000 UTC m=+0.021507958 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:47 compute-0 podman[246056]: 2026-01-31 08:33:47.802444612 +0000 UTC m=+0.154342154 container create 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:33:47 compute-0 systemd[1]: Started libpod-conmon-1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109.scope.
Jan 31 08:33:47 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:47 compute-0 podman[246056]: 2026-01-31 08:33:47.914718465 +0000 UTC m=+0.266616037 container init 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:33:47 compute-0 podman[246056]: 2026-01-31 08:33:47.921318651 +0000 UTC m=+0.273216243 container start 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:33:47 compute-0 heuristic_sutherland[246072]: 167 167
Jan 31 08:33:47 compute-0 systemd[1]: libpod-1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109.scope: Deactivated successfully.
Jan 31 08:33:47 compute-0 podman[246056]: 2026-01-31 08:33:47.958141462 +0000 UTC m=+0.310039034 container attach 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:33:47 compute-0 podman[246056]: 2026-01-31 08:33:47.95878762 +0000 UTC m=+0.310685192 container died 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 31 08:33:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bd436a7a5c79bbdd1890b7a9c775299f1258399a44ed3a376bcb20c61963976-merged.mount: Deactivated successfully.
Jan 31 08:33:48 compute-0 podman[246056]: 2026-01-31 08:33:48.298463511 +0000 UTC m=+0.650361053 container remove 1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:48 compute-0 systemd[1]: libpod-conmon-1ee8248b8e85ddf9c78ec99482bdaac5a75f4966149a6279899e39e9acbae109.scope: Deactivated successfully.
Jan 31 08:33:48 compute-0 podman[246097]: 2026-01-31 08:33:48.428944559 +0000 UTC m=+0.041832863 container create aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:33:48 compute-0 ceph-mon[75227]: pgmap v944: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:48 compute-0 podman[246097]: 2026-01-31 08:33:48.409790658 +0000 UTC m=+0.022678992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:48 compute-0 systemd[1]: Started libpod-conmon-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope.
Jan 31 08:33:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:48 compute-0 podman[246097]: 2026-01-31 08:33:48.57085769 +0000 UTC m=+0.183746014 container init aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:33:48 compute-0 podman[246097]: 2026-01-31 08:33:48.578577259 +0000 UTC m=+0.191465563 container start aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:33:48 compute-0 podman[246097]: 2026-01-31 08:33:48.780973739 +0000 UTC m=+0.393862073 container attach aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:33:48 compute-0 sleepy_bell[246114]: {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:     "0": [
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:         {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "devices": [
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "/dev/loop3"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             ],
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_name": "ceph_lv0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_size": "21470642176",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "name": "ceph_lv0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "tags": {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.crush_device_class": "",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.encrypted": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.objectstore": "bluestore",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osd_id": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.type": "block",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.vdo": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.with_tpm": "0"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             },
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "type": "block",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "vg_name": "ceph_vg0"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:         }
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:     ],
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:     "1": [
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:         {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "devices": [
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "/dev/loop4"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             ],
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_name": "ceph_lv1",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_size": "21470642176",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "name": "ceph_lv1",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "tags": {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.crush_device_class": "",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.encrypted": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.objectstore": "bluestore",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osd_id": "1",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.type": "block",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.vdo": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.with_tpm": "0"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             },
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "type": "block",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "vg_name": "ceph_vg1"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:         }
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:     ],
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:     "2": [
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:         {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "devices": [
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "/dev/loop5"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             ],
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_name": "ceph_lv2",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_size": "21470642176",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "name": "ceph_lv2",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "tags": {
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.cluster_name": "ceph",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.crush_device_class": "",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.encrypted": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.objectstore": "bluestore",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osd_id": "2",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.type": "block",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.vdo": "0",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:                 "ceph.with_tpm": "0"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             },
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "type": "block",
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:             "vg_name": "ceph_vg2"
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:         }
Jan 31 08:33:48 compute-0 sleepy_bell[246114]:     ]
Jan 31 08:33:48 compute-0 sleepy_bell[246114]: }
Jan 31 08:33:48 compute-0 systemd[1]: libpod-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope: Deactivated successfully.
Jan 31 08:33:48 compute-0 conmon[246114]: conmon aead3223c04a6e162889 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope/container/memory.events
Jan 31 08:33:48 compute-0 podman[246097]: 2026-01-31 08:33:48.872547478 +0000 UTC m=+0.485435782 container died aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-99d363bf22c95336d72f4db313b55f3ef7adc90d6287572364ed575f0998eeed-merged.mount: Deactivated successfully.
Jan 31 08:33:49 compute-0 podman[246097]: 2026-01-31 08:33:49.111460121 +0000 UTC m=+0.724348425 container remove aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:33:49 compute-0 systemd[1]: libpod-conmon-aead3223c04a6e1628894b37f3509ec322c82ed6cd7c8b583a57074c08b53a2b.scope: Deactivated successfully.
Jan 31 08:33:49 compute-0 sudo[246020]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:49 compute-0 sudo[246136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:33:49 compute-0 sudo[246136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:49 compute-0 sudo[246136]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:49 compute-0 sudo[246161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:33:49 compute-0 sudo[246161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.587910287 +0000 UTC m=+0.037097709 container create 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:33:49 compute-0 systemd[1]: Started libpod-conmon-965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae.scope.
Jan 31 08:33:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.571908745 +0000 UTC m=+0.021096197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.669168154 +0000 UTC m=+0.118355646 container init 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.674477494 +0000 UTC m=+0.123664916 container start 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:33:49 compute-0 vigorous_cartwright[246214]: 167 167
Jan 31 08:33:49 compute-0 systemd[1]: libpod-965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae.scope: Deactivated successfully.
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.682169112 +0000 UTC m=+0.131356614 container attach 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.682690866 +0000 UTC m=+0.131878328 container died 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:33:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-469d95fe89fab4fbb1f2831f1a9240a8acecdab0d02ff317416c673c203b0f06-merged.mount: Deactivated successfully.
Jan 31 08:33:49 compute-0 podman[246198]: 2026-01-31 08:33:49.716342217 +0000 UTC m=+0.165529629 container remove 965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:49 compute-0 systemd[1]: libpod-conmon-965bf0aeee3e00e472aab3b730e71c5bb12c1bb4dbdb4b777b084dc01722a1ae.scope: Deactivated successfully.
Jan 31 08:33:49 compute-0 podman[246238]: 2026-01-31 08:33:49.84804066 +0000 UTC m=+0.037811070 container create 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:33:49 compute-0 systemd[1]: Started libpod-conmon-3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32.scope.
Jan 31 08:33:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:33:49 compute-0 podman[246238]: 2026-01-31 08:33:49.916437853 +0000 UTC m=+0.106208283 container init 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:33:49 compute-0 podman[246238]: 2026-01-31 08:33:49.921933559 +0000 UTC m=+0.111703969 container start 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:49 compute-0 podman[246238]: 2026-01-31 08:33:49.92766117 +0000 UTC m=+0.117431600 container attach 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:33:49 compute-0 podman[246238]: 2026-01-31 08:33:49.832929143 +0000 UTC m=+0.022699583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:33:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:50 compute-0 lvm[246330]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:33:50 compute-0 lvm[246330]: VG ceph_vg0 finished
Jan 31 08:33:50 compute-0 lvm[246333]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:33:50 compute-0 lvm[246333]: VG ceph_vg1 finished
Jan 31 08:33:50 compute-0 ceph-mon[75227]: pgmap v945: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:50 compute-0 lvm[246335]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:33:50 compute-0 lvm[246335]: VG ceph_vg2 finished
Jan 31 08:33:50 compute-0 affectionate_sammet[246254]: {}
Jan 31 08:33:50 compute-0 systemd[1]: libpod-3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32.scope: Deactivated successfully.
Jan 31 08:33:50 compute-0 podman[246238]: 2026-01-31 08:33:50.640163828 +0000 UTC m=+0.829934278 container died 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:33:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-50bf9ad734d614bd918f1202ef57547dac5cce45444a656ca5b3d0bd3fb7b18f-merged.mount: Deactivated successfully.
Jan 31 08:33:50 compute-0 podman[246238]: 2026-01-31 08:33:50.696576993 +0000 UTC m=+0.886347413 container remove 3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_sammet, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:33:50 compute-0 systemd[1]: libpod-conmon-3db19286d01c6d015ff1e7ead0f8e63d5b37e256f35ccaef35fa2f2ef2891d32.scope: Deactivated successfully.
Jan 31 08:33:50 compute-0 sudo[246161]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:33:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:33:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:33:50 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:33:50 compute-0 sudo[246351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:33:50 compute-0 sudo[246351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:33:50 compute-0 sudo[246351]: pam_unix(sudo:session): session closed for user root
Jan 31 08:33:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:33:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:33:51 compute-0 ceph-mon[75227]: pgmap v946: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:54 compute-0 ceph-mon[75227]: pgmap v947: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:33:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:56 compute-0 ceph-mon[75227]: pgmap v948: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:58 compute-0 ceph-mon[75227]: pgmap v949: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:33:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:00 compute-0 ceph-mon[75227]: pgmap v950: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:02 compute-0 ceph-mon[75227]: pgmap v951: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:04 compute-0 ceph-mon[75227]: pgmap v952: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:06 compute-0 ceph-mon[75227]: pgmap v953: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:07 compute-0 nova_compute[238824]: 2026-01-31 08:34:07.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:07 compute-0 nova_compute[238824]: 2026-01-31 08:34:07.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:34:07 compute-0 nova_compute[238824]: 2026-01-31 08:34:07.361 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:34:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:08 compute-0 nova_compute[238824]: 2026-01-31 08:34:08.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:08 compute-0 ceph-mon[75227]: pgmap v954: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:09 compute-0 podman[246377]: 2026-01-31 08:34:09.198975646 +0000 UTC m=+0.071295896 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:34:09 compute-0 podman[246376]: 2026-01-31 08:34:09.224007764 +0000 UTC m=+0.097429855 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 08:34:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:10 compute-0 ceph-mon[75227]: pgmap v955: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:11 compute-0 nova_compute[238824]: 2026-01-31 08:34:11.398 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:12 compute-0 ceph-mon[75227]: pgmap v956: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:14 compute-0 nova_compute[238824]: 2026-01-31 08:34:14.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:14 compute-0 nova_compute[238824]: 2026-01-31 08:34:14.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:34:14 compute-0 ceph-mon[75227]: pgmap v957: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:15 compute-0 nova_compute[238824]: 2026-01-31 08:34:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:15 compute-0 nova_compute[238824]: 2026-01-31 08:34:15.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:15 compute-0 ceph-mon[75227]: pgmap v958: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.397 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.398 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:34:16 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081614150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:16 compute-0 nova_compute[238824]: 2026-01-31 08:34:16.893 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.043 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.043 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5149MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.044 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.044 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:17 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2081614150' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.193 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.194 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.214 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:34:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:34:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/219244939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.741 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.747 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.795 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.798 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:34:17 compute-0 nova_compute[238824]: 2026-01-31 08:34:17.798 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:34:17.893 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:34:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:34:17.894 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:34:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:34:17.894 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:34:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:34:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595877313' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:34:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:34:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/595877313' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:34:18 compute-0 ceph-mon[75227]: pgmap v959: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/219244939' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:34:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/595877313' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:34:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/595877313' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:34:18 compute-0 nova_compute[238824]: 2026-01-31 08:34:18.793 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:18 compute-0 nova_compute[238824]: 2026-01-31 08:34:18.794 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:18 compute-0 nova_compute[238824]: 2026-01-31 08:34:18.794 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:34:18 compute-0 nova_compute[238824]: 2026-01-31 08:34:18.794 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:34:18 compute-0 nova_compute[238824]: 2026-01-31 08:34:18.830 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:34:19 compute-0 nova_compute[238824]: 2026-01-31 08:34:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:19 compute-0 nova_compute[238824]: 2026-01-31 08:34:19.371 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:20 compute-0 ceph-mon[75227]: pgmap v960: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:21 compute-0 nova_compute[238824]: 2026-01-31 08:34:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:21 compute-0 nova_compute[238824]: 2026-01-31 08:34:21.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:34:21 compute-0 nova_compute[238824]: 2026-01-31 08:34:21.406 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:22 compute-0 nova_compute[238824]: 2026-01-31 08:34:22.372 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:34:22 compute-0 ceph-mon[75227]: pgmap v961: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:24 compute-0 ceph-mon[75227]: pgmap v962: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:25 compute-0 ceph-mon[75227]: pgmap v963: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:28 compute-0 ceph-mon[75227]: pgmap v964: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.368990) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470369022, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1201, "num_deletes": 255, "total_data_size": 1862466, "memory_usage": 1888680, "flush_reason": "Manual Compaction"}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470377456, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1834736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18803, "largest_seqno": 20003, "table_properties": {"data_size": 1829030, "index_size": 3101, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11484, "raw_average_key_size": 18, "raw_value_size": 1817574, "raw_average_value_size": 2965, "num_data_blocks": 142, "num_entries": 613, "num_filter_entries": 613, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848346, "oldest_key_time": 1769848346, "file_creation_time": 1769848470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8512 microseconds, and 3218 cpu microseconds.
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.377501) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1834736 bytes OK
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.377519) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379418) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379436) EVENT_LOG_v1 {"time_micros": 1769848470379431, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379454) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1856996, prev total WAL file size 1856996, number of live WAL files 2.
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1791KB)], [44(6335KB)]
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470379900, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8322259, "oldest_snapshot_seqno": -1}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4352 keys, 8196605 bytes, temperature: kUnknown
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470430443, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8196605, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8165772, "index_size": 18883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107386, "raw_average_key_size": 24, "raw_value_size": 8085224, "raw_average_value_size": 1857, "num_data_blocks": 794, "num_entries": 4352, "num_filter_entries": 4352, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.430797) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8196605 bytes
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.432148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 161.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.5) OK, records in: 4874, records dropped: 522 output_compression: NoCompression
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.432169) EVENT_LOG_v1 {"time_micros": 1769848470432157, "job": 22, "event": "compaction_finished", "compaction_time_micros": 50658, "compaction_time_cpu_micros": 12392, "output_level": 6, "num_output_files": 1, "total_output_size": 8196605, "num_input_records": 4874, "num_output_records": 4352, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470432671, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470433633, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.379819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:30 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:34:30.433812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:34:30 compute-0 ceph-mon[75227]: pgmap v965: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:31 compute-0 ceph-mon[75227]: pgmap v966: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:34:31
Jan 31 08:34:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:34:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:34:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Jan 31 08:34:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:34:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:34:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:34:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:34 compute-0 ceph-mon[75227]: pgmap v967: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:36 compute-0 ceph-mon[75227]: pgmap v968: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:38 compute-0 ceph-mon[75227]: pgmap v969: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:40 compute-0 podman[246466]: 2026-01-31 08:34:40.147869156 +0000 UTC m=+0.041893235 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 08:34:40 compute-0 podman[246465]: 2026-01-31 08:34:40.176937518 +0000 UTC m=+0.067925381 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:34:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:40 compute-0 ceph-mon[75227]: pgmap v970: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:41 compute-0 ceph-mon[75227]: pgmap v971: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:34:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:44 compute-0 ceph-mon[75227]: pgmap v972: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:45 compute-0 ceph-mon[75227]: pgmap v973: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:48 compute-0 ceph-mon[75227]: pgmap v974: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:50 compute-0 ceph-mon[75227]: pgmap v975: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:50 compute-0 sudo[246510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:50 compute-0 sudo[246510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:50 compute-0 sudo[246510]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:50 compute-0 sudo[246535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:34:50 compute-0 sudo[246535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:51 compute-0 sudo[246535]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:34:51 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:34:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:34:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:34:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:34:51 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:34:51 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:34:51 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:51 compute-0 sudo[246591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:34:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:34:51 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:34:51 compute-0 sudo[246591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:51 compute-0 sudo[246591]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:51 compute-0 sudo[246616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:34:51 compute-0 sudo[246616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:52.007620932 +0000 UTC m=+0.066652955 container create dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:51.966087648 +0000 UTC m=+0.025119721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:52 compute-0 systemd[1]: Started libpod-conmon-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope.
Jan 31 08:34:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:52.137574015 +0000 UTC m=+0.196606128 container init dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:52.146758974 +0000 UTC m=+0.205791037 container start dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:52.150579953 +0000 UTC m=+0.209612066 container attach dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:34:52 compute-0 systemd[1]: libpod-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope: Deactivated successfully.
Jan 31 08:34:52 compute-0 nifty_diffie[246670]: 167 167
Jan 31 08:34:52 compute-0 conmon[246670]: conmon dce65ba00438c0b2c3cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope/container/memory.events
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:52.155467211 +0000 UTC m=+0.214499234 container died dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6d3d713638cc28ccdb3fc1b796a5c222b14e1c88e4cb9dcd6054368c47fb35-merged.mount: Deactivated successfully.
Jan 31 08:34:52 compute-0 podman[246653]: 2026-01-31 08:34:52.252907545 +0000 UTC m=+0.311939618 container remove dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:34:52 compute-0 systemd[1]: libpod-conmon-dce65ba00438c0b2c3cf267ce17fcd928a7a777eb626ed76220b3a44d3d2bd55.scope: Deactivated successfully.
Jan 31 08:34:52 compute-0 podman[246693]: 2026-01-31 08:34:52.397710758 +0000 UTC m=+0.024341669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:52 compute-0 podman[246693]: 2026-01-31 08:34:52.603699 +0000 UTC m=+0.230329931 container create 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:34:52 compute-0 systemd[1]: Started libpod-conmon-5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64.scope.
Jan 31 08:34:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:52 compute-0 ceph-mon[75227]: pgmap v976: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:53 compute-0 podman[246693]: 2026-01-31 08:34:53.214715591 +0000 UTC m=+0.841346562 container init 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:34:53 compute-0 podman[246693]: 2026-01-31 08:34:53.221158883 +0000 UTC m=+0.847789834 container start 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:34:53 compute-0 podman[246693]: 2026-01-31 08:34:53.378996034 +0000 UTC m=+1.005626985 container attach 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:34:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:53 compute-0 blissful_mahavira[246709]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:34:53 compute-0 blissful_mahavira[246709]: --> All data devices are unavailable
Jan 31 08:34:53 compute-0 systemd[1]: libpod-5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64.scope: Deactivated successfully.
Jan 31 08:34:53 compute-0 podman[246693]: 2026-01-31 08:34:53.701050007 +0000 UTC m=+1.327680978 container died 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6cf1fb83701587b5ab739cf3ca178f9d61282086be2fd5b81cddb9274ec90b1-merged.mount: Deactivated successfully.
Jan 31 08:34:54 compute-0 ceph-mon[75227]: pgmap v977: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:34:55 compute-0 podman[246693]: 2026-01-31 08:34:55.399654327 +0000 UTC m=+3.026285238 container remove 5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:55 compute-0 systemd[1]: libpod-conmon-5588d12b826d5d6ff6db02255d1c7039cdd73ae53e0686b9b808769be99a9b64.scope: Deactivated successfully.
Jan 31 08:34:55 compute-0 sudo[246616]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:55 compute-0 sudo[246744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:55 compute-0 sudo[246744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:55 compute-0 sudo[246744]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:55 compute-0 sudo[246769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:34:55 compute-0 sudo[246769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:55 compute-0 podman[246806]: 2026-01-31 08:34:55.839113898 +0000 UTC m=+0.034007652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:55 compute-0 podman[246806]: 2026-01-31 08:34:55.978995253 +0000 UTC m=+0.173888967 container create 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 31 08:34:56 compute-0 ceph-mon[75227]: pgmap v978: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:56 compute-0 systemd[1]: Started libpod-conmon-0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d.scope.
Jan 31 08:34:56 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:56 compute-0 podman[246806]: 2026-01-31 08:34:56.432187121 +0000 UTC m=+0.627080885 container init 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:34:56 compute-0 podman[246806]: 2026-01-31 08:34:56.438387276 +0000 UTC m=+0.633280990 container start 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:34:56 compute-0 suspicious_hermann[246823]: 167 167
Jan 31 08:34:56 compute-0 systemd[1]: libpod-0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d.scope: Deactivated successfully.
Jan 31 08:34:56 compute-0 podman[246806]: 2026-01-31 08:34:56.571032036 +0000 UTC m=+0.765925730 container attach 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:34:56 compute-0 podman[246806]: 2026-01-31 08:34:56.572046594 +0000 UTC m=+0.766940308 container died 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e955b171f70d6b5db1cc7a0602e241f432c241ab4a262e99eff0ddc0d488f18c-merged.mount: Deactivated successfully.
Jan 31 08:34:57 compute-0 podman[246806]: 2026-01-31 08:34:57.42030484 +0000 UTC m=+1.615198554 container remove 0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_hermann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:34:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:57 compute-0 systemd[1]: libpod-conmon-0c323f17365ec58ee779c96b27053ed8ca7543f9f7590aaae7f9a13c9c424f5d.scope: Deactivated successfully.
Jan 31 08:34:57 compute-0 podman[246847]: 2026-01-31 08:34:57.595980626 +0000 UTC m=+0.036282607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:57 compute-0 podman[246847]: 2026-01-31 08:34:57.705747138 +0000 UTC m=+0.146049099 container create 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:34:57 compute-0 systemd[1]: Started libpod-conmon-28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e.scope.
Jan 31 08:34:57 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:57 compute-0 podman[246847]: 2026-01-31 08:34:57.936442309 +0000 UTC m=+0.376744260 container init 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:34:57 compute-0 podman[246847]: 2026-01-31 08:34:57.941986386 +0000 UTC m=+0.382288317 container start 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:34:57 compute-0 podman[246847]: 2026-01-31 08:34:57.998659137 +0000 UTC m=+0.438961068 container attach 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:34:58 compute-0 modest_noyce[246862]: {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:     "0": [
Jan 31 08:34:58 compute-0 modest_noyce[246862]:         {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "devices": [
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "/dev/loop3"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             ],
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_name": "ceph_lv0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_size": "21470642176",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "name": "ceph_lv0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "tags": {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.crush_device_class": "",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.encrypted": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.objectstore": "bluestore",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osd_id": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.type": "block",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.vdo": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.with_tpm": "0"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             },
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "type": "block",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "vg_name": "ceph_vg0"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:         }
Jan 31 08:34:58 compute-0 modest_noyce[246862]:     ],
Jan 31 08:34:58 compute-0 modest_noyce[246862]:     "1": [
Jan 31 08:34:58 compute-0 modest_noyce[246862]:         {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "devices": [
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "/dev/loop4"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             ],
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_name": "ceph_lv1",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_size": "21470642176",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "name": "ceph_lv1",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "tags": {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.crush_device_class": "",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.encrypted": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.objectstore": "bluestore",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osd_id": "1",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.type": "block",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.vdo": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.with_tpm": "0"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             },
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "type": "block",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "vg_name": "ceph_vg1"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:         }
Jan 31 08:34:58 compute-0 modest_noyce[246862]:     ],
Jan 31 08:34:58 compute-0 modest_noyce[246862]:     "2": [
Jan 31 08:34:58 compute-0 modest_noyce[246862]:         {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "devices": [
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "/dev/loop5"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             ],
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_name": "ceph_lv2",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_size": "21470642176",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "name": "ceph_lv2",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "tags": {
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.cluster_name": "ceph",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.crush_device_class": "",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.encrypted": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.objectstore": "bluestore",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osd_id": "2",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.type": "block",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.vdo": "0",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:                 "ceph.with_tpm": "0"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             },
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "type": "block",
Jan 31 08:34:58 compute-0 modest_noyce[246862]:             "vg_name": "ceph_vg2"
Jan 31 08:34:58 compute-0 modest_noyce[246862]:         }
Jan 31 08:34:58 compute-0 modest_noyce[246862]:     ]
Jan 31 08:34:58 compute-0 modest_noyce[246862]: }
Jan 31 08:34:58 compute-0 systemd[1]: libpod-28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e.scope: Deactivated successfully.
Jan 31 08:34:58 compute-0 podman[246847]: 2026-01-31 08:34:58.222832363 +0000 UTC m=+0.663134294 container died 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-85b449fbe49c940907352ea7dc85d050c9c4d32a63651a8da890da0e9dd9e6d3-merged.mount: Deactivated successfully.
Jan 31 08:34:58 compute-0 ceph-mon[75227]: pgmap v979: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:58 compute-0 podman[246847]: 2026-01-31 08:34:58.767666242 +0000 UTC m=+1.207968173 container remove 28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_noyce, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:34:58 compute-0 systemd[1]: libpod-conmon-28fe1ab3f27f7ed29c6572a8c9779c69712c6e430225e26bb6281c59bc18d64e.scope: Deactivated successfully.
Jan 31 08:34:58 compute-0 sudo[246769]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:58 compute-0 sudo[246885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:34:58 compute-0 sudo[246885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:58 compute-0 sudo[246885]: pam_unix(sudo:session): session closed for user root
Jan 31 08:34:58 compute-0 sudo[246910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:34:58 compute-0 sudo[246910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.219276957 +0000 UTC m=+0.039907499 container create 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:34:59 compute-0 systemd[1]: Started libpod-conmon-5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02.scope.
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.196140843 +0000 UTC m=+0.016771405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.317121363 +0000 UTC m=+0.137751935 container init 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.325384076 +0000 UTC m=+0.146014618 container start 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:34:59 compute-0 romantic_bassi[246963]: 167 167
Jan 31 08:34:59 compute-0 systemd[1]: libpod-5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02.scope: Deactivated successfully.
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.33435554 +0000 UTC m=+0.154986102 container attach 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.334821673 +0000 UTC m=+0.155452215 container died 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:34:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a848ffd85386a878a61d21ef6e92e5361ad5a32ab9f4f17964d631baf2210a2f-merged.mount: Deactivated successfully.
Jan 31 08:34:59 compute-0 podman[246947]: 2026-01-31 08:34:59.398514023 +0000 UTC m=+0.219144565 container remove 5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:34:59 compute-0 systemd[1]: libpod-conmon-5f3bcb5576467096fa4c7cd75931cd94e829dba5bb000f8b0507f53d7c934b02.scope: Deactivated successfully.
Jan 31 08:34:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:59 compute-0 podman[246987]: 2026-01-31 08:34:59.594509143 +0000 UTC m=+0.093530405 container create ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:34:59 compute-0 podman[246987]: 2026-01-31 08:34:59.528921119 +0000 UTC m=+0.027942401 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:34:59 compute-0 systemd[1]: Started libpod-conmon-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope.
Jan 31 08:34:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:34:59 compute-0 ceph-mon[75227]: pgmap v980: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:34:59 compute-0 podman[246987]: 2026-01-31 08:34:59.744678918 +0000 UTC m=+0.243700200 container init ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:34:59 compute-0 podman[246987]: 2026-01-31 08:34:59.750043549 +0000 UTC m=+0.249064811 container start ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:34:59 compute-0 podman[246987]: 2026-01-31 08:34:59.766500304 +0000 UTC m=+0.265521586 container attach ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:35:00 compute-0 lvm[247085]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:35:00 compute-0 lvm[247084]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:35:00 compute-0 lvm[247084]: VG ceph_vg0 finished
Jan 31 08:35:00 compute-0 lvm[247085]: VG ceph_vg1 finished
Jan 31 08:35:00 compute-0 lvm[247087]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:35:00 compute-0 lvm[247087]: VG ceph_vg2 finished
Jan 31 08:35:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:00 compute-0 tender_hellman[247006]: {}
Jan 31 08:35:00 compute-0 systemd[1]: libpod-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope: Deactivated successfully.
Jan 31 08:35:00 compute-0 systemd[1]: libpod-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope: Consumed 1.007s CPU time.
Jan 31 08:35:00 compute-0 podman[246987]: 2026-01-31 08:35:00.542101187 +0000 UTC m=+1.041122499 container died ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:35:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-844164e985f05769f09e965b73186b555593b481be55c2e2f53be0be980b11ab-merged.mount: Deactivated successfully.
Jan 31 08:35:00 compute-0 podman[246987]: 2026-01-31 08:35:00.647309841 +0000 UTC m=+1.146331113 container remove ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:35:00 compute-0 systemd[1]: libpod-conmon-ff5f9be30aabb5e4c6676cb6b3c42863f08fdd2bab59951e965d90440f5f8f79.scope: Deactivated successfully.
Jan 31 08:35:00 compute-0 sudo[246910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:35:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:35:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:35:00 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:35:01 compute-0 sudo[247103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:35:01 compute-0 sudo[247103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:35:01 compute-0 sudo[247103]: pam_unix(sudo:session): session closed for user root
Jan 31 08:35:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:35:01 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:35:01 compute-0 ceph-mon[75227]: pgmap v981: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:03 compute-0 ceph-mon[75227]: pgmap v982: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:06 compute-0 ceph-mon[75227]: pgmap v983: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:07 compute-0 ceph-mon[75227]: pgmap v984: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:10 compute-0 ceph-mon[75227]: pgmap v985: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:11 compute-0 podman[247130]: 2026-01-31 08:35:11.175525347 +0000 UTC m=+0.059382889 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent)
Jan 31 08:35:11 compute-0 podman[247129]: 2026-01-31 08:35:11.203768516 +0000 UTC m=+0.092214798 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 08:35:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:11 compute-0 ceph-mon[75227]: pgmap v986: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:12 compute-0 nova_compute[238824]: 2026-01-31 08:35:12.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:14 compute-0 ceph-mon[75227]: pgmap v987: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:15 compute-0 nova_compute[238824]: 2026-01-31 08:35:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:15 compute-0 nova_compute[238824]: 2026-01-31 08:35:15.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:15 compute-0 nova_compute[238824]: 2026-01-31 08:35:15.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:35:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:16 compute-0 nova_compute[238824]: 2026-01-31 08:35:16.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:16 compute-0 ceph-mon[75227]: pgmap v988: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.338 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.363 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.364 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.409 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.409 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.410 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.410 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.410 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:17 compute-0 ceph-mon[75227]: pgmap v989: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:35:17.894 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:35:17.895 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:35:17.895 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:35:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875130539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:17 compute-0 nova_compute[238824]: 2026-01-31 08:35:17.950 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:35:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873376400' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:35:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:35:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/873376400' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.072 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.073 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.224 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.224 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.242 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.351 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.352 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.368 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.390 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.411 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:35:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1875130539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/873376400' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:35:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/873376400' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:35:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:35:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320495469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.972 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:35:18 compute-0 nova_compute[238824]: 2026-01-31 08:35:18.976 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:35:19 compute-0 nova_compute[238824]: 2026-01-31 08:35:19.007 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:35:19 compute-0 nova_compute[238824]: 2026-01-31 08:35:19.010 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:35:19 compute-0 nova_compute[238824]: 2026-01-31 08:35:19.010 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:35:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:20 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2320495469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:35:20 compute-0 ceph-mon[75227]: pgmap v990: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:20 compute-0 nova_compute[238824]: 2026-01-31 08:35:20.987 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:22 compute-0 ceph-mon[75227]: pgmap v991: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:23 compute-0 nova_compute[238824]: 2026-01-31 08:35:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:35:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:24 compute-0 ceph-mon[75227]: pgmap v992: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:25 compute-0 ceph-mon[75227]: pgmap v993: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:27 compute-0 ceph-mon[75227]: pgmap v994: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:30 compute-0 ceph-mon[75227]: pgmap v995: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:35:31
Jan 31 08:35:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:35:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:35:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'images']
Jan 31 08:35:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:35:32 compute-0 ceph-mon[75227]: pgmap v996: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:35:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:35:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:33 compute-0 ceph-mon[75227]: pgmap v997: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:36 compute-0 ceph-mon[75227]: pgmap v998: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:38 compute-0 ceph-mon[75227]: pgmap v999: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:39 compute-0 ceph-mon[75227]: pgmap v1000: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:42 compute-0 podman[247218]: 2026-01-31 08:35:42.157088019 +0000 UTC m=+0.051700332 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:35:42 compute-0 podman[247217]: 2026-01-31 08:35:42.198951423 +0000 UTC m=+0.095497161 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:35:42 compute-0 ceph-mon[75227]: pgmap v1001: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:35:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:44 compute-0 ceph-mon[75227]: pgmap v1002: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:46 compute-0 ceph-mon[75227]: pgmap v1003: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:47 compute-0 ceph-mon[75227]: pgmap v1004: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:50 compute-0 ceph-mon[75227]: pgmap v1005: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:51 compute-0 ceph-mon[75227]: pgmap v1006: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:54 compute-0 ceph-mon[75227]: pgmap v1007: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:35:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:56 compute-0 ceph-mon[75227]: pgmap v1008: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:58 compute-0 ceph-mon[75227]: pgmap v1009: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:35:59 compute-0 ceph-mon[75227]: pgmap v1010: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:01 compute-0 sudo[247262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:01 compute-0 sudo[247262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:01 compute-0 sudo[247262]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 sudo[247287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:36:01 compute-0 sudo[247287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:01 compute-0 sudo[247287]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:36:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:36:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:36:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:36:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:36:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:36:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:36:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:36:01 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:36:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:36:01 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:01 compute-0 sudo[247343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:01 compute-0 sudo[247343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:01 compute-0 sudo[247343]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:01 compute-0 sudo[247368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:36:01 compute-0 sudo[247368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.050433927 +0000 UTC m=+0.031471631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.187558293 +0000 UTC m=+0.168596007 container create fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:02 compute-0 systemd[1]: Started libpod-conmon-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope.
Jan 31 08:36:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.347764951 +0000 UTC m=+0.328802665 container init fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.390785047 +0000 UTC m=+0.371822751 container start fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:36:02 compute-0 systemd[1]: libpod-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope: Deactivated successfully.
Jan 31 08:36:02 compute-0 nifty_vaughan[247422]: 167 167
Jan 31 08:36:02 compute-0 conmon[247422]: conmon fc98def84f5d0ab8e4f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope/container/memory.events
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.443229759 +0000 UTC m=+0.424267443 container attach fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.444625328 +0000 UTC m=+0.425663012 container died fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf791c8d83d32beb8a204504814cd42ad7dcea9ce34ab0ce2b504bdd5e56a1d-merged.mount: Deactivated successfully.
Jan 31 08:36:02 compute-0 podman[247405]: 2026-01-31 08:36:02.502756731 +0000 UTC m=+0.483794425 container remove fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:02 compute-0 systemd[1]: libpod-conmon-fc98def84f5d0ab8e4f46e6c4b4ac783c4f99da885e823d1c020d02f9408a3a7.scope: Deactivated successfully.
Jan 31 08:36:02 compute-0 ceph-mon[75227]: pgmap v1011: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:36:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:36:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:36:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:36:02 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:36:02 compute-0 podman[247448]: 2026-01-31 08:36:02.656852576 +0000 UTC m=+0.042133682 container create b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:02 compute-0 systemd[1]: Started libpod-conmon-b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793.scope.
Jan 31 08:36:02 compute-0 podman[247448]: 2026-01-31 08:36:02.638684282 +0000 UTC m=+0.023965418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:02 compute-0 podman[247448]: 2026-01-31 08:36:02.760187727 +0000 UTC m=+0.145468863 container init b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:02 compute-0 podman[247448]: 2026-01-31 08:36:02.768709387 +0000 UTC m=+0.153990493 container start b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:02 compute-0 podman[247448]: 2026-01-31 08:36:02.773428291 +0000 UTC m=+0.158709557 container attach b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:36:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:03 compute-0 admiring_rubin[247464]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:36:03 compute-0 admiring_rubin[247464]: --> All data devices are unavailable
Jan 31 08:36:03 compute-0 systemd[1]: libpod-b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793.scope: Deactivated successfully.
Jan 31 08:36:03 compute-0 podman[247448]: 2026-01-31 08:36:03.211926875 +0000 UTC m=+0.597207981 container died b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ee6475fde0c3e231f3fadbc563661d00ae5540544c33e5d2aa12640e4ab1a3-merged.mount: Deactivated successfully.
Jan 31 08:36:03 compute-0 podman[247448]: 2026-01-31 08:36:03.2549082 +0000 UTC m=+0.640189306 container remove b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_rubin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:03 compute-0 systemd[1]: libpod-conmon-b27e9fba3595a8ce930624378401a8841588231bf58ceda87daf22b106b8d793.scope: Deactivated successfully.
Jan 31 08:36:03 compute-0 sudo[247368]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:03 compute-0 sudo[247496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:03 compute-0 sudo[247496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:03 compute-0 sudo[247496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:03 compute-0 sudo[247521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:36:03 compute-0 sudo[247521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:03 compute-0 podman[247558]: 2026-01-31 08:36:03.949113191 +0000 UTC m=+0.040391942 container create 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:03 compute-0 systemd[1]: Started libpod-conmon-038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b.scope.
Jan 31 08:36:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:04 compute-0 podman[247558]: 2026-01-31 08:36:04.024157022 +0000 UTC m=+0.115435783 container init 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:36:04 compute-0 podman[247558]: 2026-01-31 08:36:03.930517646 +0000 UTC m=+0.021796437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:04 compute-0 podman[247558]: 2026-01-31 08:36:04.029585426 +0000 UTC m=+0.120864177 container start 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:04 compute-0 podman[247558]: 2026-01-31 08:36:04.032473098 +0000 UTC m=+0.123751849 container attach 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:36:04 compute-0 hungry_tesla[247574]: 167 167
Jan 31 08:36:04 compute-0 systemd[1]: libpod-038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b.scope: Deactivated successfully.
Jan 31 08:36:04 compute-0 podman[247558]: 2026-01-31 08:36:04.034371151 +0000 UTC m=+0.125649912 container died 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c916189ffc2da3372bc7d2944e1f801516e5d9503fa3456fd83df4402e3ee97-merged.mount: Deactivated successfully.
Jan 31 08:36:04 compute-0 podman[247558]: 2026-01-31 08:36:04.072445017 +0000 UTC m=+0.163723768 container remove 038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:04 compute-0 systemd[1]: libpod-conmon-038a5918a6d91d5de4afd76e27f224eab775e6d7e0822319b09749917ffcd53b.scope: Deactivated successfully.
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.210740556 +0000 UTC m=+0.046348941 container create e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 31 08:36:04 compute-0 systemd[1]: Started libpod-conmon-e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c.scope.
Jan 31 08:36:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.190963337 +0000 UTC m=+0.026571792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.317468523 +0000 UTC m=+0.153076938 container init e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.325268083 +0000 UTC m=+0.160876458 container start e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.329430391 +0000 UTC m=+0.165038806 container attach e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:36:04 compute-0 eager_jemison[247614]: {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:     "0": [
Jan 31 08:36:04 compute-0 eager_jemison[247614]:         {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "devices": [
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "/dev/loop3"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             ],
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_name": "ceph_lv0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_size": "21470642176",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "name": "ceph_lv0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "tags": {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.crush_device_class": "",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.encrypted": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.objectstore": "bluestore",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osd_id": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.type": "block",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.vdo": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.with_tpm": "0"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             },
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "type": "block",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "vg_name": "ceph_vg0"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:         }
Jan 31 08:36:04 compute-0 eager_jemison[247614]:     ],
Jan 31 08:36:04 compute-0 eager_jemison[247614]:     "1": [
Jan 31 08:36:04 compute-0 eager_jemison[247614]:         {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "devices": [
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "/dev/loop4"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             ],
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_name": "ceph_lv1",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_size": "21470642176",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "name": "ceph_lv1",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "tags": {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.crush_device_class": "",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.encrypted": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.objectstore": "bluestore",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osd_id": "1",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.type": "block",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.vdo": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.with_tpm": "0"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             },
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "type": "block",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "vg_name": "ceph_vg1"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:         }
Jan 31 08:36:04 compute-0 eager_jemison[247614]:     ],
Jan 31 08:36:04 compute-0 eager_jemison[247614]:     "2": [
Jan 31 08:36:04 compute-0 eager_jemison[247614]:         {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "devices": [
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "/dev/loop5"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             ],
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_name": "ceph_lv2",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_size": "21470642176",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "name": "ceph_lv2",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "tags": {
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.cluster_name": "ceph",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.crush_device_class": "",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.encrypted": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.objectstore": "bluestore",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osd_id": "2",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.type": "block",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.vdo": "0",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:                 "ceph.with_tpm": "0"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             },
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "type": "block",
Jan 31 08:36:04 compute-0 eager_jemison[247614]:             "vg_name": "ceph_vg2"
Jan 31 08:36:04 compute-0 eager_jemison[247614]:         }
Jan 31 08:36:04 compute-0 eager_jemison[247614]:     ]
Jan 31 08:36:04 compute-0 eager_jemison[247614]: }
Jan 31 08:36:04 compute-0 ceph-mon[75227]: pgmap v1012: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:04 compute-0 systemd[1]: libpod-e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c.scope: Deactivated successfully.
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.602731976 +0000 UTC m=+0.438340361 container died e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a18a0affc386fd754c09795027e8ccf141d12fd6149be5d6ad056855eb810b28-merged.mount: Deactivated successfully.
Jan 31 08:36:04 compute-0 podman[247598]: 2026-01-31 08:36:04.727319707 +0000 UTC m=+0.562928132 container remove e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_jemison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:04 compute-0 systemd[1]: libpod-conmon-e9b7d18fc364c89029f6e14892005ba066b5a50cc8688c5ab9f213d91981d61c.scope: Deactivated successfully.
Jan 31 08:36:04 compute-0 sudo[247521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:04 compute-0 sudo[247638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:36:04 compute-0 sudo[247638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:04 compute-0 sudo[247638]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:04 compute-0 sudo[247663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:36:04 compute-0 sudo[247663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.19372881 +0000 UTC m=+0.047007229 container create c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:36:05 compute-0 systemd[1]: Started libpod-conmon-c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d.scope.
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.172906802 +0000 UTC m=+0.026185311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.286527223 +0000 UTC m=+0.139805722 container init c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.295895198 +0000 UTC m=+0.149173657 container start c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:05 compute-0 gracious_antonelli[247717]: 167 167
Jan 31 08:36:05 compute-0 systemd[1]: libpod-c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d.scope: Deactivated successfully.
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.301121856 +0000 UTC m=+0.154400275 container attach c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.302245887 +0000 UTC m=+0.155524346 container died c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f5eda88d5cb2383bf848f048b9f15d1902421db7a442c4e16f5b5617f290b7f-merged.mount: Deactivated successfully.
Jan 31 08:36:05 compute-0 podman[247701]: 2026-01-31 08:36:05.344460331 +0000 UTC m=+0.197738750 container remove c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_antonelli, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:05 compute-0 systemd[1]: libpod-conmon-c0413c8ea769949141edec636beb47035a7cf9a6a7df347447e7f1b494b5538d.scope: Deactivated successfully.
Jan 31 08:36:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:05 compute-0 podman[247740]: 2026-01-31 08:36:05.50932354 +0000 UTC m=+0.051011892 container create 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:36:05 compute-0 systemd[1]: Started libpod-conmon-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope.
Jan 31 08:36:05 compute-0 podman[247740]: 2026-01-31 08:36:05.479413365 +0000 UTC m=+0.021101727 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:36:05 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:36:05 compute-0 podman[247740]: 2026-01-31 08:36:05.602293218 +0000 UTC m=+0.143981600 container init 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:36:05 compute-0 podman[247740]: 2026-01-31 08:36:05.609019248 +0000 UTC m=+0.150707590 container start 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:36:05 compute-0 podman[247740]: 2026-01-31 08:36:05.612953229 +0000 UTC m=+0.154641611 container attach 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:36:06 compute-0 lvm[247834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:36:06 compute-0 lvm[247834]: VG ceph_vg0 finished
Jan 31 08:36:06 compute-0 lvm[247836]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:36:06 compute-0 lvm[247836]: VG ceph_vg1 finished
Jan 31 08:36:06 compute-0 lvm[247838]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:36:06 compute-0 lvm[247838]: VG ceph_vg2 finished
Jan 31 08:36:06 compute-0 nifty_pasteur[247756]: {}
Jan 31 08:36:06 compute-0 systemd[1]: libpod-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope: Deactivated successfully.
Jan 31 08:36:06 compute-0 systemd[1]: libpod-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope: Consumed 1.230s CPU time.
Jan 31 08:36:06 compute-0 podman[247740]: 2026-01-31 08:36:06.445030387 +0000 UTC m=+0.986718729 container died 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e4e38f48314cf892fd7e9bb59348d6e6edef9dd678f990e0d9962c8c00d9773-merged.mount: Deactivated successfully.
Jan 31 08:36:06 compute-0 podman[247740]: 2026-01-31 08:36:06.495225646 +0000 UTC m=+1.036913978 container remove 7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:36:06 compute-0 systemd[1]: libpod-conmon-7fc59431eaeab72fa21194bae70a6a6ea41ea79ec3100b3244fbde5f99d295d8.scope: Deactivated successfully.
Jan 31 08:36:06 compute-0 sudo[247663]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:36:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:36:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:36:06 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:36:06 compute-0 ceph-mon[75227]: pgmap v1013: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:36:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:36:06 compute-0 sudo[247854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:36:06 compute-0 sudo[247854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:36:06 compute-0 sudo[247854]: pam_unix(sudo:session): session closed for user root
Jan 31 08:36:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:08 compute-0 ceph-mon[75227]: pgmap v1014: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:10 compute-0 ceph-mon[75227]: pgmap v1015: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:11 compute-0 ceph-mon[75227]: pgmap v1016: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:13 compute-0 podman[247880]: 2026-01-31 08:36:13.180232396 +0000 UTC m=+0.062613401 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:36:13 compute-0 podman[247879]: 2026-01-31 08:36:13.217305354 +0000 UTC m=+0.099968176 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:36:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:36:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4627 writes, 20K keys, 4627 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4627 writes, 4627 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1319 writes, 6013 keys, 1319 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s
                                           Interval WAL: 1319 writes, 1319 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     15.3      1.50              0.05        11    0.136       0      0       0.0       0.0
                                             L6      1/0    7.82 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2     34.6     28.7      2.56              0.20        10    0.256     43K   5191       0.0       0.0
                                            Sum      1/0    7.82 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     21.8     23.7      4.05              0.25        21    0.193     43K   5191       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.7     14.1     14.3      3.16              0.11        10    0.316     24K   2989       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     34.6     28.7      2.56              0.20        10    0.256     43K   5191       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     15.3      1.49              0.05        10    0.149       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.022, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.09 GB read, 0.05 MB/s read, 4.1 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 3.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 304.00 MB usage: 7.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000113 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(430,6.79 MB,2.23426%) FilterBlock(22,128.86 KB,0.0413945%) IndexBlock(22,242.23 KB,0.0778148%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:36:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:14 compute-0 nova_compute[238824]: 2026-01-31 08:36:14.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:14 compute-0 ceph-mon[75227]: pgmap v1017: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:15 compute-0 nova_compute[238824]: 2026-01-31 08:36:15.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:16 compute-0 nova_compute[238824]: 2026-01-31 08:36:16.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:16 compute-0 ceph-mon[75227]: pgmap v1018: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.364 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.365 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:36:17.895 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:36:17.896 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:36:17.896 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:36:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814207391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:17 compute-0 nova_compute[238824]: 2026-01-31 08:36:17.951 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:36:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077727800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:36:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:36:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077727800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.095 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.097 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.097 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.098 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.167 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.168 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:36:18 compute-0 ceph-mon[75227]: pgmap v1019: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.186 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.681379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578681449, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1095, "num_deletes": 251, "total_data_size": 1642941, "memory_usage": 1673920, "flush_reason": "Manual Compaction"}
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578810861, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1616668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20004, "largest_seqno": 21098, "table_properties": {"data_size": 1611362, "index_size": 2766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11269, "raw_average_key_size": 19, "raw_value_size": 1600751, "raw_average_value_size": 2788, "num_data_blocks": 127, "num_entries": 574, "num_filter_entries": 574, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848470, "oldest_key_time": 1769848470, "file_creation_time": 1769848578, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 129534 microseconds, and 4724 cpu microseconds.
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.810920) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1616668 bytes OK
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.810942) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837228) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837290) EVENT_LOG_v1 {"time_micros": 1769848578837282, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1637857, prev total WAL file size 1637857, number of live WAL files 2.
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.838013) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1578KB)], [47(8004KB)]
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578838045, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9813273, "oldest_snapshot_seqno": -1}
Jan 31 08:36:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:36:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705001609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.880 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.887 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.907 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.910 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:36:18 compute-0 nova_compute[238824]: 2026-01-31 08:36:18.910 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:36:18 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4412 keys, 8023610 bytes, temperature: kUnknown
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848578999571, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 8023610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7992517, "index_size": 18951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 109228, "raw_average_key_size": 24, "raw_value_size": 7911063, "raw_average_value_size": 1793, "num_data_blocks": 793, "num_entries": 4412, "num_filter_entries": 4412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848578, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.999880) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 8023610 bytes
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.004950) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.7 rd, 49.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.8 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 4926, records dropped: 514 output_compression: NoCompression
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.004980) EVENT_LOG_v1 {"time_micros": 1769848579004966, "job": 24, "event": "compaction_finished", "compaction_time_micros": 161644, "compaction_time_cpu_micros": 13274, "output_level": 6, "num_output_files": 1, "total_output_size": 8023610, "num_input_records": 4926, "num_output_records": 4412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848579005403, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848579006685, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:18.837896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:36:19 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:36:19.006888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:36:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/814207391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1077727800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:36:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1077727800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:36:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2705001609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:36:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:19 compute-0 nova_compute[238824]: 2026-01-31 08:36:19.911 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:19 compute-0 nova_compute[238824]: 2026-01-31 08:36:19.912 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:19 compute-0 nova_compute[238824]: 2026-01-31 08:36:19.912 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:36:19 compute-0 nova_compute[238824]: 2026-01-31 08:36:19.912 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:36:19 compute-0 nova_compute[238824]: 2026-01-31 08:36:19.939 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:36:19 compute-0 nova_compute[238824]: 2026-01-31 08:36:19.940 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:20 compute-0 ceph-mon[75227]: pgmap v1020: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:20 compute-0 nova_compute[238824]: 2026-01-31 08:36:20.361 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:22 compute-0 ceph-mon[75227]: pgmap v1021: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:23 compute-0 nova_compute[238824]: 2026-01-31 08:36:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:36:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:24 compute-0 ceph-mon[75227]: pgmap v1022: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:26 compute-0 ceph-mon[75227]: pgmap v1023: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:28 compute-0 ceph-mon[75227]: pgmap v1024: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:30 compute-0 ceph-mon[75227]: pgmap v1025: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:36:31
Jan 31 08:36:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:36:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:36:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Jan 31 08:36:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:36:32 compute-0 ceph-mon[75227]: pgmap v1026: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:36:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:36:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:34 compute-0 ceph-mon[75227]: pgmap v1027: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:36 compute-0 ceph-mon[75227]: pgmap v1028: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:37 compute-0 ceph-mon[75227]: pgmap v1029: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:40 compute-0 ceph-mon[75227]: pgmap v1030: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:42 compute-0 ceph-mon[75227]: pgmap v1031: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9431184059615526e-07 of space, bias 1.0, pg target 5.829355217884658e-05 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.607793448422658e-06 of space, bias 4.0, pg target 0.0031293521381071895 quantized to 16 (current 16)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:36:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:44 compute-0 podman[247967]: 2026-01-31 08:36:44.172953325 +0000 UTC m=+0.058554326 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:36:44 compute-0 podman[247966]: 2026-01-31 08:36:44.189854392 +0000 UTC m=+0.077549903 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 31 08:36:44 compute-0 ceph-mon[75227]: pgmap v1032: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:46 compute-0 ceph-mon[75227]: pgmap v1033: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:48 compute-0 ceph-mon[75227]: pgmap v1034: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:49 compute-0 ceph-mon[75227]: pgmap v1035: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:52 compute-0 ceph-mon[75227]: pgmap v1036: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:53 compute-0 ceph-mon[75227]: pgmap v1037: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:36:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:56 compute-0 ceph-mon[75227]: pgmap v1038: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:57 compute-0 ceph-mon[75227]: pgmap v1039: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:36:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:00 compute-0 ceph-mon[75227]: pgmap v1040: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:01 compute-0 ceph-mon[75227]: pgmap v1041: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:04 compute-0 ceph-mon[75227]: pgmap v1042: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 KiB/s wr, 0 op/s
Jan 31 08:37:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 08:37:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 08:37:05 compute-0 ceph-mon[75227]: pgmap v1043: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 682 KiB/s wr, 0 op/s
Jan 31 08:37:05 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 08:37:06 compute-0 sudo[248010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:06 compute-0 sudo[248010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:06 compute-0 sudo[248010]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:06 compute-0 sudo[248035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:37:06 compute-0 sudo[248035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:06 compute-0 ceph-mon[75227]: osdmap e137: 3 total, 3 up, 3 in
Jan 31 08:37:07 compute-0 sudo[248035]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:07 compute-0 sudo[248090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:07 compute-0 sudo[248090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:07 compute-0 sudo[248090]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:07 compute-0 sudo[248115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 31 08:37:07 compute-0 sudo[248115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:07 compute-0 sudo[248115]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 819 KiB/s wr, 9 op/s
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:07 compute-0 sudo[248158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:07 compute-0 sudo[248158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:07 compute-0 sudo[248158]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:07 compute-0 sudo[248183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:37:07 compute-0 sudo[248183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 08:37:07 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 08:37:07 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 08:37:07 compute-0 podman[248221]: 2026-01-31 08:37:07.887677752 +0000 UTC m=+0.044810477 container create 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:37:07 compute-0 systemd[1]: Started libpod-conmon-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope.
Jan 31 08:37:07 compute-0 podman[248221]: 2026-01-31 08:37:07.864073245 +0000 UTC m=+0.021205990 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:07 compute-0 podman[248221]: 2026-01-31 08:37:07.990790027 +0000 UTC m=+0.147922762 container init 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:37:08 compute-0 podman[248221]: 2026-01-31 08:37:08.001776947 +0000 UTC m=+0.158909672 container start 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:08 compute-0 systemd[1]: libpod-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope: Deactivated successfully.
Jan 31 08:37:08 compute-0 vigilant_tharp[248238]: 167 167
Jan 31 08:37:08 compute-0 conmon[248238]: conmon 20ae7ff6567c8b7fb63d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope/container/memory.events
Jan 31 08:37:08 compute-0 podman[248221]: 2026-01-31 08:37:08.010283428 +0000 UTC m=+0.167416253 container attach 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:37:08 compute-0 podman[248221]: 2026-01-31 08:37:08.011242445 +0000 UTC m=+0.168375170 container died 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-99f227ccd708901342097eea12842b09094128c09585be5ebe0111cebf94ebf4-merged.mount: Deactivated successfully.
Jan 31 08:37:08 compute-0 podman[248221]: 2026-01-31 08:37:08.089438655 +0000 UTC m=+0.246571390 container remove 20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_tharp, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:37:08 compute-0 systemd[1]: libpod-conmon-20ae7ff6567c8b7fb63d0d1eaa1d87df7a2aaaf7ae53596146e4b803cc146f42.scope: Deactivated successfully.
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.226909031 +0000 UTC m=+0.044992093 container create a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 31 08:37:08 compute-0 systemd[1]: Started libpod-conmon-a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084.scope.
Jan 31 08:37:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.207492462 +0000 UTC m=+0.025575544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.314525027 +0000 UTC m=+0.132608179 container init a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.321532665 +0000 UTC m=+0.139615747 container start a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.326083964 +0000 UTC m=+0.144167056 container attach a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:37:08 compute-0 ceph-mon[75227]: pgmap v1045: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 819 KiB/s wr, 9 op/s
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:37:08 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:37:08 compute-0 ceph-mon[75227]: osdmap e138: 3 total, 3 up, 3 in
Jan 31 08:37:08 compute-0 distracted_poincare[248277]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:37:08 compute-0 distracted_poincare[248277]: --> All data devices are unavailable
Jan 31 08:37:08 compute-0 systemd[1]: libpod-a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084.scope: Deactivated successfully.
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.793794453 +0000 UTC m=+0.611877535 container died a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-23e1f661a634b4281278e4c0b6a8b82737b29aa26d5727372369b860b6b2c8b2-merged.mount: Deactivated successfully.
Jan 31 08:37:08 compute-0 podman[248261]: 2026-01-31 08:37:08.834743351 +0000 UTC m=+0.652826423 container remove a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_poincare, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:08 compute-0 systemd[1]: libpod-conmon-a4393626ad10a4596e2e9faf4229f72221f7f9c5cb3446bfdec75c8521ddc084.scope: Deactivated successfully.
Jan 31 08:37:08 compute-0 sudo[248183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:08 compute-0 sudo[248307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:08 compute-0 sudo[248307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:08 compute-0 sudo[248307]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:08 compute-0 sudo[248332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:37:09 compute-0 sudo[248332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.269201111 +0000 UTC m=+0.037484121 container create 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:37:09 compute-0 systemd[1]: Started libpod-conmon-0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762.scope.
Jan 31 08:37:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.340707122 +0000 UTC m=+0.108990182 container init 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.252720605 +0000 UTC m=+0.021003645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.349330476 +0000 UTC m=+0.117613486 container start 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.352281379 +0000 UTC m=+0.120564439 container attach 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:37:09 compute-0 elegant_robinson[248386]: 167 167
Jan 31 08:37:09 compute-0 systemd[1]: libpod-0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762.scope: Deactivated successfully.
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.35337226 +0000 UTC m=+0.121655290 container died 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe27db8f9e9c3597a43129a627d2e03bc8f007ecb8c62dc3028d35cdf3511e27-merged.mount: Deactivated successfully.
Jan 31 08:37:09 compute-0 podman[248369]: 2026-01-31 08:37:09.390005075 +0000 UTC m=+0.158288085 container remove 0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_robinson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:37:09 compute-0 systemd[1]: libpod-conmon-0d6daf2473780eb9012b860eabdb63556d5341209e6854473d41de722aadd762.scope: Deactivated successfully.
Jan 31 08:37:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1024 KiB/s wr, 11 op/s
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.538796001 +0000 UTC m=+0.039900789 container create 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:09 compute-0 systemd[1]: Started libpod-conmon-1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3.scope.
Jan 31 08:37:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.519314881 +0000 UTC m=+0.020419679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.623813984 +0000 UTC m=+0.124918772 container init 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.62860905 +0000 UTC m=+0.129713838 container start 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.631860832 +0000 UTC m=+0.132965630 container attach 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]: {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:     "0": [
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:         {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "devices": [
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "/dev/loop3"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             ],
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_name": "ceph_lv0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_size": "21470642176",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "name": "ceph_lv0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "tags": {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cluster_name": "ceph",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.crush_device_class": "",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.encrypted": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.objectstore": "bluestore",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osd_id": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.type": "block",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.vdo": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.with_tpm": "0"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             },
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "type": "block",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "vg_name": "ceph_vg0"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:         }
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:     ],
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:     "1": [
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:         {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "devices": [
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "/dev/loop4"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             ],
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_name": "ceph_lv1",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_size": "21470642176",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "name": "ceph_lv1",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "tags": {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cluster_name": "ceph",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.crush_device_class": "",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.encrypted": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.objectstore": "bluestore",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osd_id": "1",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.type": "block",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.vdo": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.with_tpm": "0"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             },
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "type": "block",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "vg_name": "ceph_vg1"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:         }
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:     ],
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:     "2": [
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:         {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "devices": [
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "/dev/loop5"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             ],
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_name": "ceph_lv2",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_size": "21470642176",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "name": "ceph_lv2",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "tags": {
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.cluster_name": "ceph",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.crush_device_class": "",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.encrypted": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.objectstore": "bluestore",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osd_id": "2",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.type": "block",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.vdo": "0",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:                 "ceph.with_tpm": "0"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             },
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "type": "block",
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:             "vg_name": "ceph_vg2"
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:         }
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]:     ]
Jan 31 08:37:09 compute-0 naughty_jepsen[248426]: }
Jan 31 08:37:09 compute-0 systemd[1]: libpod-1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3.scope: Deactivated successfully.
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.940900137 +0000 UTC m=+0.442004955 container died 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:37:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f2dd5902e3ba2c99ad44ba57c96fcab3eeafaa45bdc9356d39b79b079c97d5-merged.mount: Deactivated successfully.
Jan 31 08:37:09 compute-0 podman[248410]: 2026-01-31 08:37:09.993071452 +0000 UTC m=+0.494176250 container remove 1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:37:10 compute-0 systemd[1]: libpod-conmon-1eef9a82c8cc5f47c9993f9f0d47ca015b86e71c1dbc2e465a465ed3e6b0f9f3.scope: Deactivated successfully.
Jan 31 08:37:10 compute-0 sudo[248332]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 sudo[248447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:37:10 compute-0 sudo[248447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 sudo[248447]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:10 compute-0 sudo[248472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:37:10 compute-0 sudo[248472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.438154212 +0000 UTC m=+0.039490637 container create 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:37:10 compute-0 systemd[1]: Started libpod-conmon-76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838.scope.
Jan 31 08:37:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.507710758 +0000 UTC m=+0.109047183 container init 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.515749565 +0000 UTC m=+0.117085990 container start 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.421743788 +0000 UTC m=+0.023080233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.519195832 +0000 UTC m=+0.120532277 container attach 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:10 compute-0 sleepy_kalam[248527]: 167 167
Jan 31 08:37:10 compute-0 systemd[1]: libpod-76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838.scope: Deactivated successfully.
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.520817918 +0000 UTC m=+0.122154343 container died 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:37:10 compute-0 ceph-mon[75227]: pgmap v1047: 305 pgs: 305 active+clean; 8.5 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 1024 KiB/s wr, 11 op/s
Jan 31 08:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e5088b90d55fcd98fa23e81e5c443360e7960bb6a3d0bd60f68dbc3fe8b0536-merged.mount: Deactivated successfully.
Jan 31 08:37:10 compute-0 podman[248510]: 2026-01-31 08:37:10.561389075 +0000 UTC m=+0.162725520 container remove 76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:10 compute-0 systemd[1]: libpod-conmon-76da6958c621667f0fb133e14dd016c3ee5b346a2f866621988c081d0677b838.scope: Deactivated successfully.
Jan 31 08:37:10 compute-0 podman[248550]: 2026-01-31 08:37:10.715359686 +0000 UTC m=+0.049071548 container create 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:37:10 compute-0 systemd[1]: Started libpod-conmon-8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b.scope.
Jan 31 08:37:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:37:10 compute-0 podman[248550]: 2026-01-31 08:37:10.689706981 +0000 UTC m=+0.023418933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:37:10 compute-0 podman[248550]: 2026-01-31 08:37:10.787601528 +0000 UTC m=+0.121313430 container init 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:37:10 compute-0 podman[248550]: 2026-01-31 08:37:10.793653919 +0000 UTC m=+0.127365791 container start 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:37:10 compute-0 podman[248550]: 2026-01-31 08:37:10.800741349 +0000 UTC m=+0.134453221 container attach 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:37:11 compute-0 lvm[248646]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:37:11 compute-0 lvm[248646]: VG ceph_vg1 finished
Jan 31 08:37:11 compute-0 lvm[248645]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:37:11 compute-0 lvm[248645]: VG ceph_vg0 finished
Jan 31 08:37:11 compute-0 lvm[248648]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:37:11 compute-0 lvm[248648]: VG ceph_vg2 finished
Jan 31 08:37:11 compute-0 fervent_ganguly[248567]: {}
Jan 31 08:37:11 compute-0 systemd[1]: libpod-8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b.scope: Deactivated successfully.
Jan 31 08:37:11 compute-0 podman[248550]: 2026-01-31 08:37:11.484710561 +0000 UTC m=+0.818422463 container died 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:37:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 08:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2871ea29ca894d0a199e8a5c28414e5718b643d0b41082ee0c4d858e43143f07-merged.mount: Deactivated successfully.
Jan 31 08:37:11 compute-0 podman[248550]: 2026-01-31 08:37:11.545176871 +0000 UTC m=+0.878888773 container remove 8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_ganguly, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:37:11 compute-0 systemd[1]: libpod-conmon-8c75ac2f40ef5e6aee1e78e565b81586d3769030cd9e4450220e4aa040cdda8b.scope: Deactivated successfully.
Jan 31 08:37:11 compute-0 sudo[248472]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:37:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:37:11 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:11 compute-0 sudo[248664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:37:11 compute-0 sudo[248664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:37:11 compute-0 sudo[248664]: pam_unix(sudo:session): session closed for user root
Jan 31 08:37:12 compute-0 ceph-mon[75227]: pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 31 08:37:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:37:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 31 08:37:14 compute-0 nova_compute[238824]: 2026-01-31 08:37:14.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:14 compute-0 ceph-mon[75227]: pgmap v1049: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 4.1 MiB/s wr, 47 op/s
Jan 31 08:37:15 compute-0 podman[248690]: 2026-01-31 08:37:15.208636516 +0000 UTC m=+0.087825063 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:37:15 compute-0 podman[248689]: 2026-01-31 08:37:15.225203595 +0000 UTC m=+0.105563855 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 08:37:15 compute-0 nova_compute[238824]: 2026-01-31 08:37:15.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.4 MiB/s wr, 29 op/s
Jan 31 08:37:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:37:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6186 writes, 25K keys, 6186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6186 writes, 1125 syncs, 5.50 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 363 writes, 834 keys, 363 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 363 writes, 164 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:37:16 compute-0 ceph-mon[75227]: pgmap v1050: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.4 MiB/s wr, 29 op/s
Jan 31 08:37:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.3 MiB/s wr, 29 op/s
Jan 31 08:37:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:37:17.897 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:37:17.897 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:37:17.898 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:37:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609810312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:37:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:37:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3609810312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.364 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.365 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:18 compute-0 ceph-mon[75227]: pgmap v1051: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.3 MiB/s wr, 29 op/s
Jan 31 08:37:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3609810312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:37:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3609810312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:37:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:37:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534117675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:18 compute-0 nova_compute[238824]: 2026-01-31 08:37:18.903 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.099 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.101 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.101 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.102 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.181 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.181 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.204 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:37:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 24 op/s
Jan 31 08:37:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 31 08:37:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 08:37:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1534117675' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:19 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 31 08:37:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:37:19 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37918866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.784 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.788 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.803 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.806 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:37:19 compute-0 nova_compute[238824]: 2026-01-31 08:37:19.806 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:37:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:20 compute-0 nova_compute[238824]: 2026-01-31 08:37:20.808 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:20 compute-0 nova_compute[238824]: 2026-01-31 08:37:20.808 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:37:20 compute-0 nova_compute[238824]: 2026-01-31 08:37:20.809 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:37:20 compute-0 ceph-mon[75227]: pgmap v1052: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.8 MiB/s wr, 24 op/s
Jan 31 08:37:20 compute-0 ceph-mon[75227]: osdmap e139: 3 total, 3 up, 3 in
Jan 31 08:37:20 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/37918866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:37:20 compute-0 nova_compute[238824]: 2026-01-31 08:37:20.824 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:37:20 compute-0 nova_compute[238824]: 2026-01-31 08:37:20.824 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:37:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 7626 writes, 30K keys, 7626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7626 writes, 1597 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 570 writes, 1601 keys, 570 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                           Interval WAL: 570 writes, 250 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:37:21 compute-0 nova_compute[238824]: 2026-01-31 08:37:21.350 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 33 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 31 08:37:21 compute-0 ceph-mon[75227]: pgmap v1054: 305 pgs: 305 active+clean; 33 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 31 08:37:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 31 08:37:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 08:37:22 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 31 08:37:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 33 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Jan 31 08:37:23 compute-0 ceph-mon[75227]: osdmap e140: 3 total, 3 up, 3 in
Jan 31 08:37:23 compute-0 ceph-mon[75227]: pgmap v1056: 305 pgs: 305 active+clean; 33 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Jan 31 08:37:24 compute-0 nova_compute[238824]: 2026-01-31 08:37:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:37:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 8.5 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Jan 31 08:37:26 compute-0 ceph-mon[75227]: pgmap v1057: 305 pgs: 305 active+clean; 8.5 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.7 KiB/s wr, 39 op/s
Jan 31 08:37:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 31 08:37:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:37:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.8 total, 600.0 interval
                                           Cumulative writes: 6134 writes, 25K keys, 6134 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6134 writes, 1062 syncs, 5.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 543 writes, 1575 keys, 543 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                           Interval WAL: 543 writes, 236 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:37:28 compute-0 ceph-mon[75227]: pgmap v1058: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Jan 31 08:37:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 31 08:37:29 compute-0 ceph-mon[75227]: pgmap v1059: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.8 KiB/s wr, 50 op/s
Jan 31 08:37:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 31 08:37:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 08:37:30 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 31 08:37:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Jan 31 08:37:31 compute-0 ceph-mon[75227]: osdmap e141: 3 total, 3 up, 3 in
Jan 31 08:37:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:37:31
Jan 31 08:37:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:37:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:37:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'images']
Jan 31 08:37:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 08:37:32 compute-0 ceph-mon[75227]: pgmap v1061: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 35 op/s
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:37:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:37:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.9 KiB/s wr, 30 op/s
Jan 31 08:37:34 compute-0 ceph-mon[75227]: pgmap v1062: 305 pgs: 305 active+clean; 461 KiB data, 141 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.9 KiB/s wr, 30 op/s
Jan 31 08:37:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Jan 31 08:37:36 compute-0 ceph-mon[75227]: pgmap v1063: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 614 B/s wr, 18 op/s
Jan 31 08:37:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:38 compute-0 ceph-mon[75227]: pgmap v1064: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:40 compute-0 ceph-mon[75227]: pgmap v1065: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:42 compute-0 ceph-mon[75227]: pgmap v1066: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:37:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:44 compute-0 ceph-mon[75227]: pgmap v1067: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:46 compute-0 podman[248781]: 2026-01-31 08:37:46.159848781 +0000 UTC m=+0.053173284 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:37:46 compute-0 podman[248780]: 2026-01-31 08:37:46.180804344 +0000 UTC m=+0.075771643 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 08:37:46 compute-0 ceph-mon[75227]: pgmap v1068: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:48 compute-0 ceph-mon[75227]: pgmap v1069: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:49 compute-0 ceph-mon[75227]: pgmap v1070: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:52 compute-0 ceph-mon[75227]: pgmap v1071: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:54 compute-0 ceph-mon[75227]: pgmap v1072: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:37:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:56 compute-0 ceph-mon[75227]: pgmap v1073: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:58 compute-0 ceph-mon[75227]: pgmap v1074: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:37:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:00 compute-0 ceph-mon[75227]: pgmap v1075: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:01 compute-0 ceph-mon[75227]: pgmap v1076: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:04 compute-0 ceph-mon[75227]: pgmap v1077: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:05 compute-0 ceph-mon[75227]: pgmap v1078: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:07 compute-0 ceph-mon[75227]: pgmap v1079: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:10 compute-0 ceph-mon[75227]: pgmap v1080: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:11 compute-0 sudo[248822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:11 compute-0 sudo[248822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:11 compute-0 sudo[248822]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:11 compute-0 sudo[248847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:38:11 compute-0 sudo[248847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:12 compute-0 sudo[248847]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:38:12 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:12 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:38:12 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:12 compute-0 sudo[248893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:12 compute-0 sudo[248893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:12 compute-0 sudo[248893]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:12 compute-0 sudo[248918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:38:12 compute-0 sudo[248918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:12 compute-0 sudo[248918]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:12 compute-0 ceph-mon[75227]: pgmap v1081: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:12 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:12 compute-0 sudo[248975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:12 compute-0 sudo[248975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:12 compute-0 sudo[248975]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:12 compute-0 sudo[249000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- inventory --format=json-pretty --filter-for-batch
Jan 31 08:38:12 compute-0 sudo[249000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.083775602 +0000 UTC m=+0.060134277 container create b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:38:13 compute-0 systemd[1]: Started libpod-conmon-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope.
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.058792813 +0000 UTC m=+0.035151528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.182681297 +0000 UTC m=+0.159040012 container init b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.192460414 +0000 UTC m=+0.168819089 container start b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.197333902 +0000 UTC m=+0.173692577 container attach b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:38:13 compute-0 festive_torvalds[249053]: 167 167
Jan 31 08:38:13 compute-0 systemd[1]: libpod-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope: Deactivated successfully.
Jan 31 08:38:13 compute-0 conmon[249053]: conmon b76803e5b9fcc5d10139 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope/container/memory.events
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.201379897 +0000 UTC m=+0.177738572 container died b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb1ed9233ac16777b4e4d31488084d33c6143aebe9c179c35d88550a643b5de2-merged.mount: Deactivated successfully.
Jan 31 08:38:13 compute-0 podman[249037]: 2026-01-31 08:38:13.279717919 +0000 UTC m=+0.256076594 container remove b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_torvalds, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:13 compute-0 systemd[1]: libpod-conmon-b76803e5b9fcc5d10139b4dd088ac551138442e325bab609fc4f1af2cebcb44d.scope: Deactivated successfully.
Jan 31 08:38:13 compute-0 podman[249077]: 2026-01-31 08:38:13.455643339 +0000 UTC m=+0.057861942 container create e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:38:13 compute-0 systemd[1]: Started libpod-conmon-e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579.scope.
Jan 31 08:38:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:13 compute-0 podman[249077]: 2026-01-31 08:38:13.432720998 +0000 UTC m=+0.034939671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:13 compute-0 podman[249077]: 2026-01-31 08:38:13.542864222 +0000 UTC m=+0.145082835 container init e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:38:13 compute-0 podman[249077]: 2026-01-31 08:38:13.548614925 +0000 UTC m=+0.150833538 container start e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 31 08:38:13 compute-0 podman[249077]: 2026-01-31 08:38:13.552078684 +0000 UTC m=+0.154297277 container attach e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:14 compute-0 reverent_einstein[249094]: [
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:     {
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "available": false,
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "being_replaced": false,
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "ceph_device_lvm": false,
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "lsm_data": {},
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "lvs": [],
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "path": "/dev/sr0",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "rejected_reasons": [
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "Has a FileSystem",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "Insufficient space (<5GB)"
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         ],
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         "sys_api": {
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "actuators": null,
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "device_nodes": [
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:                 "sr0"
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             ],
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "devname": "sr0",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "human_readable_size": "482.00 KB",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "id_bus": "ata",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "model": "QEMU DVD-ROM",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "nr_requests": "2",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "parent": "/dev/sr0",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "partitions": {},
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "path": "/dev/sr0",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "removable": "1",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "rev": "2.5+",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "ro": "0",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "rotational": "1",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "sas_address": "",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "sas_device_handle": "",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "scheduler_mode": "mq-deadline",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "sectors": 0,
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "sectorsize": "2048",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "size": 493568.0,
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "support_discard": "2048",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "type": "disk",
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:             "vendor": "QEMU"
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:         }
Jan 31 08:38:14 compute-0 reverent_einstein[249094]:     }
Jan 31 08:38:14 compute-0 reverent_einstein[249094]: ]
Jan 31 08:38:14 compute-0 systemd[1]: libpod-e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579.scope: Deactivated successfully.
Jan 31 08:38:14 compute-0 podman[249077]: 2026-01-31 08:38:14.098840491 +0000 UTC m=+0.701059094 container died e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-83cb49cbe886d0c1e79243d7a148d8272707e4031cae67bacf5f788062b82e4f-merged.mount: Deactivated successfully.
Jan 31 08:38:14 compute-0 podman[249077]: 2026-01-31 08:38:14.573303567 +0000 UTC m=+1.175522160 container remove e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_einstein, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:38:14 compute-0 sudo[249000]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:38:14 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:38:14 compute-0 systemd[1]: libpod-conmon-e63fba119b2730bb04fdb16cdc765e30551d8175d924cdf8d16448d4dc073579.scope: Deactivated successfully.
Jan 31 08:38:14 compute-0 sudo[249823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:14 compute-0 ceph-mon[75227]: pgmap v1082: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:38:14 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:38:14 compute-0 sudo[249823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:14 compute-0 sudo[249823]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:14 compute-0 sudo[249848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:38:14 compute-0 sudo[249848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.015812408 +0000 UTC m=+0.038994087 container create 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:38:15 compute-0 systemd[1]: Started libpod-conmon-50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6.scope.
Jan 31 08:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.091215515 +0000 UTC m=+0.114397244 container init 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.000222075 +0000 UTC m=+0.023403714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.097905285 +0000 UTC m=+0.121086944 container start 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 31 08:38:15 compute-0 gallant_lehmann[249901]: 167 167
Jan 31 08:38:15 compute-0 systemd[1]: libpod-50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6.scope: Deactivated successfully.
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.101794965 +0000 UTC m=+0.124976654 container attach 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.102998139 +0000 UTC m=+0.126179788 container died 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a5be5edbf5e0604e085ddc1816f2b32256699a1a8058d57434bc8b2ea0d2014-merged.mount: Deactivated successfully.
Jan 31 08:38:15 compute-0 podman[249885]: 2026-01-31 08:38:15.136809158 +0000 UTC m=+0.159990797 container remove 50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:38:15 compute-0 systemd[1]: libpod-conmon-50953cf23aec59cebec0fc4f661fde6d0c06bfceb197ecff1eae8563974113a6.scope: Deactivated successfully.
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.298029371 +0000 UTC m=+0.047097367 container create edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:38:15 compute-0 systemd[1]: Started libpod-conmon-edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e.scope.
Jan 31 08:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.280157454 +0000 UTC m=+0.029225470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.382922739 +0000 UTC m=+0.131990785 container init edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.38861985 +0000 UTC m=+0.137687826 container start edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.392235293 +0000 UTC m=+0.141303359 container attach edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:38:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:15 compute-0 ceph-mon[75227]: pgmap v1083: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:15 compute-0 sleepy_rhodes[249942]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:38:15 compute-0 sleepy_rhodes[249942]: --> All data devices are unavailable
Jan 31 08:38:15 compute-0 systemd[1]: libpod-edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e.scope: Deactivated successfully.
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.860996728 +0000 UTC m=+0.610064724 container died edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-70165d4b1c1da3702853a191f086586a58133e20cac5a53344c203c7b5c1bc17-merged.mount: Deactivated successfully.
Jan 31 08:38:15 compute-0 podman[249925]: 2026-01-31 08:38:15.946989617 +0000 UTC m=+0.696057603 container remove edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:38:15 compute-0 systemd[1]: libpod-conmon-edf09665a0d9b8e116395216cf00f78ddf607cc60356d5a8df2452a44448d41e.scope: Deactivated successfully.
Jan 31 08:38:15 compute-0 sudo[249848]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:16 compute-0 sudo[249977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:16 compute-0 sudo[249977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:16 compute-0 sudo[249977]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:16 compute-0 sudo[250002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:38:16 compute-0 sudo[250002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:16 compute-0 nova_compute[238824]: 2026-01-31 08:38:16.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:16 compute-0 nova_compute[238824]: 2026-01-31 08:38:16.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.396921127 +0000 UTC m=+0.049049322 container create c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:38:16 compute-0 systemd[1]: Started libpod-conmon-c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950.scope.
Jan 31 08:38:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.46293877 +0000 UTC m=+0.115066955 container init c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.37302855 +0000 UTC m=+0.025156725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.472295375 +0000 UTC m=+0.124423580 container start c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:38:16 compute-0 heuristic_pike[250058]: 167 167
Jan 31 08:38:16 compute-0 systemd[1]: libpod-c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950.scope: Deactivated successfully.
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.486840958 +0000 UTC m=+0.138969133 container attach c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.487756534 +0000 UTC m=+0.139884729 container died c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4393ae062be713202300682d0681f2d06d1bc5435a314186d3f35637af85167-merged.mount: Deactivated successfully.
Jan 31 08:38:16 compute-0 podman[250039]: 2026-01-31 08:38:16.525197606 +0000 UTC m=+0.177325791 container remove c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:38:16 compute-0 systemd[1]: libpod-conmon-c25fc753b8f4fb4e5a8fec512ec27c9cf181ea159240648b6e3f51dcedc74950.scope: Deactivated successfully.
Jan 31 08:38:16 compute-0 podman[250057]: 2026-01-31 08:38:16.534375216 +0000 UTC m=+0.091672401 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 08:38:16 compute-0 podman[250053]: 2026-01-31 08:38:16.590978771 +0000 UTC m=+0.146214478 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:38:16 compute-0 podman[250124]: 2026-01-31 08:38:16.658805935 +0000 UTC m=+0.043457034 container create 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:38:16 compute-0 systemd[1]: Started libpod-conmon-485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240.scope.
Jan 31 08:38:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:16 compute-0 podman[250124]: 2026-01-31 08:38:16.638790147 +0000 UTC m=+0.023441276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:16 compute-0 podman[250124]: 2026-01-31 08:38:16.752380429 +0000 UTC m=+0.137031588 container init 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:38:16 compute-0 podman[250124]: 2026-01-31 08:38:16.758657507 +0000 UTC m=+0.143308606 container start 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:38:16 compute-0 podman[250124]: 2026-01-31 08:38:16.762793714 +0000 UTC m=+0.147444803 container attach 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:38:17 compute-0 lucid_bohr[250141]: {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:     "0": [
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:         {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "devices": [
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "/dev/loop3"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             ],
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_name": "ceph_lv0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_size": "21470642176",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "name": "ceph_lv0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "tags": {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.crush_device_class": "",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.encrypted": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.objectstore": "bluestore",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osd_id": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.type": "block",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.vdo": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.with_tpm": "0"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             },
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "type": "block",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "vg_name": "ceph_vg0"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:         }
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:     ],
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:     "1": [
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:         {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "devices": [
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "/dev/loop4"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             ],
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_name": "ceph_lv1",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_size": "21470642176",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "name": "ceph_lv1",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "tags": {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.crush_device_class": "",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.encrypted": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.objectstore": "bluestore",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osd_id": "1",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.type": "block",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.vdo": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.with_tpm": "0"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             },
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "type": "block",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "vg_name": "ceph_vg1"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:         }
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:     ],
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:     "2": [
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:         {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "devices": [
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "/dev/loop5"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             ],
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_name": "ceph_lv2",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_size": "21470642176",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "name": "ceph_lv2",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "tags": {
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.cluster_name": "ceph",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.crush_device_class": "",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.encrypted": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.objectstore": "bluestore",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osd_id": "2",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.type": "block",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.vdo": "0",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:                 "ceph.with_tpm": "0"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             },
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "type": "block",
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:             "vg_name": "ceph_vg2"
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:         }
Jan 31 08:38:17 compute-0 lucid_bohr[250141]:     ]
Jan 31 08:38:17 compute-0 lucid_bohr[250141]: }
Jan 31 08:38:17 compute-0 systemd[1]: libpod-485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240.scope: Deactivated successfully.
Jan 31 08:38:17 compute-0 podman[250124]: 2026-01-31 08:38:17.046529021 +0000 UTC m=+0.431180120 container died 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 31 08:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-72dda44c06ce6ccab48f2982dbd0100e8e51b7c026c60a9579b7b4ed2fdf0915-merged.mount: Deactivated successfully.
Jan 31 08:38:17 compute-0 podman[250124]: 2026-01-31 08:38:17.09335784 +0000 UTC m=+0.478008929 container remove 485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:38:17 compute-0 systemd[1]: libpod-conmon-485622e96a5069fba49c8465903b3d82731ea00fa3e9d0be3deffecc1a1fb240.scope: Deactivated successfully.
Jan 31 08:38:17 compute-0 sudo[250002]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:17 compute-0 sudo[250161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:38:17 compute-0 sudo[250161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:17 compute-0 sudo[250161]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:17 compute-0 sudo[250186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:38:17 compute-0 sudo[250186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.547245873 +0000 UTC m=+0.034615013 container create c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:38:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:17 compute-0 systemd[1]: Started libpod-conmon-c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78.scope.
Jan 31 08:38:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.605921677 +0000 UTC m=+0.093290867 container init c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.610792905 +0000 UTC m=+0.098162085 container start c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.615445547 +0000 UTC m=+0.102814727 container attach c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:38:17 compute-0 compassionate_mestorf[250240]: 167 167
Jan 31 08:38:17 compute-0 systemd[1]: libpod-c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78.scope: Deactivated successfully.
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.617140185 +0000 UTC m=+0.104509325 container died c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.534662496 +0000 UTC m=+0.022031646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a219c3dc351f63417c3ac153a9ac979c3d80c81f9a786ad0e1d6e2eb23506d-merged.mount: Deactivated successfully.
Jan 31 08:38:17 compute-0 podman[250223]: 2026-01-31 08:38:17.650325926 +0000 UTC m=+0.137695086 container remove c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:38:17 compute-0 systemd[1]: libpod-conmon-c579047720263a9d49d13c3a1c0e2399725755340bdccf86d5f9a9673db64f78.scope: Deactivated successfully.
Jan 31 08:38:17 compute-0 podman[250266]: 2026-01-31 08:38:17.814088241 +0000 UTC m=+0.055181866 container create 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:38:17 compute-0 systemd[1]: Started libpod-conmon-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope.
Jan 31 08:38:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:38:17 compute-0 podman[250266]: 2026-01-31 08:38:17.78620906 +0000 UTC m=+0.027302695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:38:17 compute-0 podman[250266]: 2026-01-31 08:38:17.899115042 +0000 UTC m=+0.140208677 container init 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:38:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:38:17.897 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:38:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:38:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:17 compute-0 podman[250266]: 2026-01-31 08:38:17.906038069 +0000 UTC m=+0.147131684 container start 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:38:17 compute-0 podman[250266]: 2026-01-31 08:38:17.914225121 +0000 UTC m=+0.155318766 container attach 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:38:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:38:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967753749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:38:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:38:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967753749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:38:18 compute-0 lvm[250361]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:38:18 compute-0 lvm[250360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:38:18 compute-0 lvm[250360]: VG ceph_vg0 finished
Jan 31 08:38:18 compute-0 lvm[250361]: VG ceph_vg1 finished
Jan 31 08:38:18 compute-0 lvm[250363]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:38:18 compute-0 lvm[250363]: VG ceph_vg2 finished
Jan 31 08:38:18 compute-0 ceph-mon[75227]: pgmap v1084: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3967753749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:38:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3967753749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:38:18 compute-0 suspicious_chaum[250282]: {}
Jan 31 08:38:18 compute-0 systemd[1]: libpod-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope: Deactivated successfully.
Jan 31 08:38:18 compute-0 systemd[1]: libpod-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope: Consumed 1.150s CPU time.
Jan 31 08:38:18 compute-0 podman[250266]: 2026-01-31 08:38:18.731621923 +0000 UTC m=+0.972715568 container died 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:38:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3643f1b1f70353c6915bcbd484f196ddd03155fa9ebe5ca440910134082e5c2a-merged.mount: Deactivated successfully.
Jan 31 08:38:18 compute-0 podman[250266]: 2026-01-31 08:38:18.868145615 +0000 UTC m=+1.109239230 container remove 49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_chaum, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:38:18 compute-0 systemd[1]: libpod-conmon-49b3146360f319da03012895a7de9d6072fd7e7d172fec427b5f3c650e9f4373.scope: Deactivated successfully.
Jan 31 08:38:18 compute-0 sudo[250186]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:38:18 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:38:18 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:19 compute-0 sudo[250378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:38:19 compute-0 sudo[250378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:38:19 compute-0 sudo[250378]: pam_unix(sudo:session): session closed for user root
Jan 31 08:38:19 compute-0 nova_compute[238824]: 2026-01-31 08:38:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:19 compute-0 nova_compute[238824]: 2026-01-31 08:38:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:19 compute-0 nova_compute[238824]: 2026-01-31 08:38:19.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:19 compute-0 nova_compute[238824]: 2026-01-31 08:38:19.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:38:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:38:19 compute-0 ceph-mon[75227]: pgmap v1085: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.361 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.362 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:38:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486414388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:20 compute-0 nova_compute[238824]: 2026-01-31 08:38:20.907 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.052 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.054 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5075MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.054 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.054 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.116 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.117 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.131 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:38:21 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/486414388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:38:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742062989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.671 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.676 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.696 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.698 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:38:21 compute-0 nova_compute[238824]: 2026-01-31 08:38:21.698 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:38:22 compute-0 ceph-mon[75227]: pgmap v1086: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:22 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2742062989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:38:22 compute-0 nova_compute[238824]: 2026-01-31 08:38:22.698 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:22 compute-0 nova_compute[238824]: 2026-01-31 08:38:22.728 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:22 compute-0 nova_compute[238824]: 2026-01-31 08:38:22.728 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:38:22 compute-0 nova_compute[238824]: 2026-01-31 08:38:22.728 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:38:22 compute-0 nova_compute[238824]: 2026-01-31 08:38:22.751 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:38:23 compute-0 nova_compute[238824]: 2026-01-31 08:38:23.387 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:24 compute-0 nova_compute[238824]: 2026-01-31 08:38:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:38:24 compute-0 ceph-mon[75227]: pgmap v1087: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:26 compute-0 ceph-mon[75227]: pgmap v1088: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:28 compute-0 ceph-mon[75227]: pgmap v1089: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:30 compute-0 ceph-mon[75227]: pgmap v1090: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:38:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:38:31
Jan 31 08:38:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:38:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:38:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'default.rgw.control']
Jan 31 08:38:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:38:32 compute-0 ceph-mon[75227]: pgmap v1091: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:38:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:38:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:38:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:38:33 compute-0 ceph-mon[75227]: pgmap v1092: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Jan 31 08:38:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:36 compute-0 ceph-mon[75227]: pgmap v1093: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:38 compute-0 ceph-mon[75227]: pgmap v1094: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:39 compute-0 ceph-mon[75227]: pgmap v1095: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:42 compute-0 ceph-mon[75227]: pgmap v1096: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:38:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 08:38:43 compute-0 ceph-mon[75227]: pgmap v1097: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 08:38:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 08:38:46 compute-0 ceph-mon[75227]: pgmap v1098: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 73 op/s
Jan 31 08:38:47 compute-0 podman[250448]: 2026-01-31 08:38:47.219007541 +0000 UTC m=+0.111677129 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:38:47 compute-0 podman[250447]: 2026-01-31 08:38:47.230404883 +0000 UTC m=+0.122381161 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:38:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:48 compute-0 ceph-mon[75227]: pgmap v1099: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:49 compute-0 ceph-mon[75227]: pgmap v1100: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:52 compute-0 ceph-mon[75227]: pgmap v1101: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:54 compute-0 ceph-mon[75227]: pgmap v1102: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:38:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:56 compute-0 ceph-mon[75227]: pgmap v1103: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:58 compute-0 ceph-mon[75227]: pgmap v1104: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:38:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:00 compute-0 ceph-mon[75227]: pgmap v1105: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:02 compute-0 ceph-mon[75227]: pgmap v1106: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:04 compute-0 ceph-mon[75227]: pgmap v1107: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:06 compute-0 ceph-mon[75227]: pgmap v1108: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:07 compute-0 ceph-mon[75227]: pgmap v1109: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:08 compute-0 nova_compute[238824]: 2026-01-31 08:39:08.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:10 compute-0 ceph-mon[75227]: pgmap v1110: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:12 compute-0 ceph-mon[75227]: pgmap v1111: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:14 compute-0 nova_compute[238824]: 2026-01-31 08:39:14.401 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:14 compute-0 nova_compute[238824]: 2026-01-31 08:39:14.402 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:39:14 compute-0 nova_compute[238824]: 2026-01-31 08:39:14.423 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:39:14 compute-0 ceph-mon[75227]: pgmap v1112: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:16 compute-0 ceph-mon[75227]: pgmap v1113: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:16 compute-0 nova_compute[238824]: 2026-01-31 08:39:16.361 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:17 compute-0 nova_compute[238824]: 2026-01-31 08:39:17.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:39:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:39:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:39:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:39:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2859608869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:39:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:39:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2859608869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:39:18 compute-0 podman[250494]: 2026-01-31 08:39:18.172706956 +0000 UTC m=+0.061741082 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:39:18 compute-0 podman[250493]: 2026-01-31 08:39:18.197021615 +0000 UTC m=+0.087047339 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 08:39:18 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:39:18 compute-0 ceph-mon[75227]: pgmap v1114: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:19 compute-0 sudo[250538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:19 compute-0 sudo[250538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:19 compute-0 sudo[250538]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:19 compute-0 sudo[250563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:39:19 compute-0 sudo[250563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2859608869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:39:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2859608869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:39:19 compute-0 nova_compute[238824]: 2026-01-31 08:39:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:19 compute-0 podman[250633]: 2026-01-31 08:39:19.897926205 +0000 UTC m=+0.437842419 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 31 08:39:20 compute-0 podman[250655]: 2026-01-31 08:39:20.114500978 +0000 UTC m=+0.097046924 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:39:20 compute-0 nova_compute[238824]: 2026-01-31 08:39:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:20 compute-0 nova_compute[238824]: 2026-01-31 08:39:20.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:39:20 compute-0 podman[250633]: 2026-01-31 08:39:20.384757353 +0000 UTC m=+0.924673487 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 08:39:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:20 compute-0 ceph-mon[75227]: pgmap v1115: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:21 compute-0 nova_compute[238824]: 2026-01-31 08:39:21.359 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:39:21 compute-0 sudo[250563]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:39:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:39:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:21 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:21 compute-0 sudo[250820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:21 compute-0 sudo[250820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:21 compute-0 sudo[250820]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:21 compute-0 sudo[250845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:39:21 compute-0 sudo[250845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:22 compute-0 sudo[250845]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:22 compute-0 sudo[250901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:22 compute-0 sudo[250901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:22 compute-0 sudo[250901]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:22 compute-0 sudo[250926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:39:22 compute-0 sudo[250926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.351 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.416 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.416 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.416 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.417 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.417 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:22 compute-0 ceph-mon[75227]: pgmap v1116: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:39:22 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:39:22 compute-0 podman[250983]: 2026-01-31 08:39:22.594566997 +0000 UTC m=+0.023517448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:22 compute-0 podman[250983]: 2026-01-31 08:39:22.69021603 +0000 UTC m=+0.119166451 container create 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:22 compute-0 systemd[1]: Started libpod-conmon-16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd.scope.
Jan 31 08:39:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:22 compute-0 podman[250983]: 2026-01-31 08:39:22.842283443 +0000 UTC m=+0.271233884 container init 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:39:22 compute-0 podman[250983]: 2026-01-31 08:39:22.850013782 +0000 UTC m=+0.278964193 container start 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:39:22 compute-0 inspiring_sinoussi[250999]: 167 167
Jan 31 08:39:22 compute-0 systemd[1]: libpod-16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd.scope: Deactivated successfully.
Jan 31 08:39:22 compute-0 podman[250983]: 2026-01-31 08:39:22.953764985 +0000 UTC m=+0.382715416 container attach 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:39:22 compute-0 podman[250983]: 2026-01-31 08:39:22.954619629 +0000 UTC m=+0.383570090 container died 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:39:22 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3963587154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:22 compute-0 nova_compute[238824]: 2026-01-31 08:39:22.995 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.177 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.179 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5071MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.179 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.179 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-90d9d8a69b9c5266cef28cb8e77243d306054e7d4fbf4df620f1d42ea1eab311-merged.mount: Deactivated successfully.
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.242 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.243 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.262 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:39:23 compute-0 podman[250983]: 2026-01-31 08:39:23.491531516 +0000 UTC m=+0.920481927 container remove 16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:23 compute-0 systemd[1]: libpod-conmon-16422dfe8028f07118f2886e5ed6d4413108384306cd3e78746d17cce80891cd.scope: Deactivated successfully.
Jan 31 08:39:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:23 compute-0 podman[251045]: 2026-01-31 08:39:23.609929024 +0000 UTC m=+0.026124502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:23 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3963587154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:23 compute-0 podman[251045]: 2026-01-31 08:39:23.751246262 +0000 UTC m=+0.167441720 container create fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:39:23 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:39:23 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1907332852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:23 compute-0 systemd[1]: Started libpod-conmon-fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32.scope.
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.909 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.915 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:39:23 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.935 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.937 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:39:23 compute-0 nova_compute[238824]: 2026-01-31 08:39:23.937 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:39:23 compute-0 podman[251045]: 2026-01-31 08:39:23.943115913 +0000 UTC m=+0.359311361 container init fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:39:23 compute-0 podman[251045]: 2026-01-31 08:39:23.948818845 +0000 UTC m=+0.365014273 container start fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:39:23 compute-0 podman[251045]: 2026-01-31 08:39:23.957533362 +0000 UTC m=+0.373728820 container attach fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:24 compute-0 nervous_ganguly[251063]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:39:24 compute-0 nervous_ganguly[251063]: --> All data devices are unavailable
Jan 31 08:39:24 compute-0 systemd[1]: libpod-fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32.scope: Deactivated successfully.
Jan 31 08:39:24 compute-0 podman[251045]: 2026-01-31 08:39:24.379921312 +0000 UTC m=+0.796116780 container died fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f6354db30478eb9dda006c9bd1d644f7a2848d6a18ceb1b461c628a3939c6a1-merged.mount: Deactivated successfully.
Jan 31 08:39:24 compute-0 podman[251045]: 2026-01-31 08:39:24.435410346 +0000 UTC m=+0.851605774 container remove fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:24 compute-0 systemd[1]: libpod-conmon-fed7cacfdfac00d89d68de7d8c18aed6cf4d55e8080d820bdc408512bac70a32.scope: Deactivated successfully.
Jan 31 08:39:24 compute-0 sudo[250926]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:24 compute-0 sudo[251096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:24 compute-0 sudo[251096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:24 compute-0 sudo[251096]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:24 compute-0 sudo[251121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:39:24 compute-0 sudo[251121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:24 compute-0 ceph-mon[75227]: pgmap v1117: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:24 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1907332852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.856274022 +0000 UTC m=+0.034536450 container create b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:39:24 compute-0 systemd[1]: Started libpod-conmon-b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac.scope.
Jan 31 08:39:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.929633583 +0000 UTC m=+0.107896051 container init b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.933903984 +0000 UTC m=+0.112166442 container start b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.937481165 +0000 UTC m=+0.115743623 container attach b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.841976107 +0000 UTC m=+0.020238565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:24 compute-0 unruffled_khayyam[251173]: 167 167
Jan 31 08:39:24 compute-0 systemd[1]: libpod-b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac.scope: Deactivated successfully.
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.939517203 +0000 UTC m=+0.117779651 container died b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2a95b7372c64d771e453eb6d1aa90f97bec0c8f6d5cbac3f020d7c767646be-merged.mount: Deactivated successfully.
Jan 31 08:39:24 compute-0 podman[251157]: 2026-01-31 08:39:24.97362348 +0000 UTC m=+0.151885918 container remove b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:39:24 compute-0 systemd[1]: libpod-conmon-b7dbfeab7328ce0f3407602350cb35de538079ded90d2a490d06b4062f52e8ac.scope: Deactivated successfully.
Jan 31 08:39:25 compute-0 podman[251196]: 2026-01-31 08:39:25.107187739 +0000 UTC m=+0.039162772 container create 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:39:25 compute-0 systemd[1]: Started libpod-conmon-9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a.scope.
Jan 31 08:39:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:25 compute-0 podman[251196]: 2026-01-31 08:39:25.173858649 +0000 UTC m=+0.105833682 container init 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:39:25 compute-0 podman[251196]: 2026-01-31 08:39:25.184230014 +0000 UTC m=+0.116205047 container start 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:39:25 compute-0 podman[251196]: 2026-01-31 08:39:25.088380645 +0000 UTC m=+0.020355698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:25 compute-0 podman[251196]: 2026-01-31 08:39:25.187533767 +0000 UTC m=+0.119508800 container attach 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:25 compute-0 laughing_feistel[251213]: {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:     "0": [
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:         {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "devices": [
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "/dev/loop3"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             ],
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_name": "ceph_lv0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_size": "21470642176",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "name": "ceph_lv0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "tags": {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.crush_device_class": "",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.encrypted": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.objectstore": "bluestore",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osd_id": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.type": "block",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.vdo": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.with_tpm": "0"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             },
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "type": "block",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "vg_name": "ceph_vg0"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:         }
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:     ],
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:     "1": [
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:         {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "devices": [
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "/dev/loop4"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             ],
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_name": "ceph_lv1",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_size": "21470642176",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "name": "ceph_lv1",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "tags": {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.crush_device_class": "",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.encrypted": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.objectstore": "bluestore",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osd_id": "1",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.type": "block",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.vdo": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.with_tpm": "0"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             },
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "type": "block",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "vg_name": "ceph_vg1"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:         }
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:     ],
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:     "2": [
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:         {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "devices": [
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "/dev/loop5"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             ],
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_name": "ceph_lv2",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_size": "21470642176",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "name": "ceph_lv2",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "tags": {
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.cluster_name": "ceph",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.crush_device_class": "",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.encrypted": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.objectstore": "bluestore",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osd_id": "2",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.type": "block",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.vdo": "0",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:                 "ceph.with_tpm": "0"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             },
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "type": "block",
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:             "vg_name": "ceph_vg2"
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:         }
Jan 31 08:39:25 compute-0 laughing_feistel[251213]:     ]
Jan 31 08:39:25 compute-0 laughing_feistel[251213]: }
Jan 31 08:39:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:25 compute-0 systemd[1]: libpod-9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a.scope: Deactivated successfully.
Jan 31 08:39:25 compute-0 podman[251222]: 2026-01-31 08:39:25.494266797 +0000 UTC m=+0.022750606 container died 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-578779d1e94b14b7f3fdec0a3cf536069c7786c91398cbfb84dc943350b77626-merged.mount: Deactivated successfully.
Jan 31 08:39:25 compute-0 podman[251222]: 2026-01-31 08:39:25.53738971 +0000 UTC m=+0.065873509 container remove 9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:25 compute-0 systemd[1]: libpod-conmon-9e6ce36d1456be66269b9da8fe9668c70f87e3c7dd45eaa7353cfc786fbe3e3a.scope: Deactivated successfully.
Jan 31 08:39:25 compute-0 sudo[251121]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:25 compute-0 sudo[251235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:39:25 compute-0 sudo[251235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:25 compute-0 sudo[251235]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:25 compute-0 sudo[251260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:39:25 compute-0 sudo[251260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:25 compute-0 ceph-mon[75227]: pgmap v1118: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.894442457 +0000 UTC m=+0.029474957 container create d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:39:25 compute-0 nova_compute[238824]: 2026-01-31 08:39:25.921 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:25 compute-0 nova_compute[238824]: 2026-01-31 08:39:25.922 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:39:25 compute-0 systemd[1]: Started libpod-conmon-d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee.scope.
Jan 31 08:39:25 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.954220022 +0000 UTC m=+0.089252542 container init d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.959039539 +0000 UTC m=+0.094072039 container start d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:25 compute-0 ecstatic_sammet[251312]: 167 167
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.962562779 +0000 UTC m=+0.097595299 container attach d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:39:25 compute-0 systemd[1]: libpod-d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee.scope: Deactivated successfully.
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.963295949 +0000 UTC m=+0.098328449 container died d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.881349995 +0000 UTC m=+0.016382515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9297b5ee26d0d4b3926d7328d84a99892c246367d10faa652122a162ebfd99ba-merged.mount: Deactivated successfully.
Jan 31 08:39:25 compute-0 podman[251296]: 2026-01-31 08:39:25.996148581 +0000 UTC m=+0.131181131 container remove d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:39:26 compute-0 systemd[1]: libpod-conmon-d439289c45052fae67fbc7ca13451607a78fbac9c90f40f528fd98587c5bd3ee.scope: Deactivated successfully.
Jan 31 08:39:26 compute-0 podman[251337]: 2026-01-31 08:39:26.11070885 +0000 UTC m=+0.035156728 container create 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:39:26 compute-0 systemd[1]: Started libpod-conmon-26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48.scope.
Jan 31 08:39:26 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:39:26 compute-0 podman[251337]: 2026-01-31 08:39:26.183785593 +0000 UTC m=+0.108233511 container init 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:39:26 compute-0 podman[251337]: 2026-01-31 08:39:26.189273409 +0000 UTC m=+0.113721297 container start 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:26 compute-0 podman[251337]: 2026-01-31 08:39:26.095685014 +0000 UTC m=+0.020132912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:39:26 compute-0 podman[251337]: 2026-01-31 08:39:26.192821049 +0000 UTC m=+0.117268947 container attach 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:39:26 compute-0 lvm[251431]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:39:26 compute-0 lvm[251431]: VG ceph_vg0 finished
Jan 31 08:39:26 compute-0 lvm[251433]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:39:26 compute-0 lvm[251433]: VG ceph_vg1 finished
Jan 31 08:39:26 compute-0 lvm[251435]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:39:26 compute-0 lvm[251435]: VG ceph_vg2 finished
Jan 31 08:39:26 compute-0 gallant_kare[251354]: {}
Jan 31 08:39:26 compute-0 systemd[1]: libpod-26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48.scope: Deactivated successfully.
Jan 31 08:39:26 compute-0 podman[251337]: 2026-01-31 08:39:26.904855423 +0000 UTC m=+0.829303371 container died 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:39:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0cc31479589ceb40d844f171d2c4f2b2db43e028dcbe27cd4a2409a712c5cee-merged.mount: Deactivated successfully.
Jan 31 08:39:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:27 compute-0 podman[251337]: 2026-01-31 08:39:27.609790866 +0000 UTC m=+1.534238774 container remove 26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:39:27 compute-0 sudo[251260]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:27 compute-0 systemd[1]: libpod-conmon-26910092139c5932e500fba0b2fd366e7703eb7a55abf37c8534f82542891a48.scope: Deactivated successfully.
Jan 31 08:39:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:39:28 compute-0 ceph-mon[75227]: pgmap v1119: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:39:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:28 compute-0 sudo[251451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:39:28 compute-0 sudo[251451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:39:28 compute-0 sudo[251451]: pam_unix(sudo:session): session closed for user root
Jan 31 08:39:29 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:29 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:39:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:30 compute-0 ceph-mon[75227]: pgmap v1120: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:39:31
Jan 31 08:39:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:39:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:39:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr']
Jan 31 08:39:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:39:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:32 compute-0 ceph-mon[75227]: pgmap v1121: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:39:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:39:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:34 compute-0 ceph-mon[75227]: pgmap v1122: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:36 compute-0 ceph-mon[75227]: pgmap v1123: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:38 compute-0 ceph-mon[75227]: pgmap v1124: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:40 compute-0 ceph-mon[75227]: pgmap v1125: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:41 compute-0 ceph-mon[75227]: pgmap v1126: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:39:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:44 compute-0 ceph-mon[75227]: pgmap v1127: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:46 compute-0 ceph-mon[75227]: pgmap v1128: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:48 compute-0 ceph-mon[75227]: pgmap v1129: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:49 compute-0 podman[251477]: 2026-01-31 08:39:49.203141077 +0000 UTC m=+0.090255371 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 08:39:49 compute-0 podman[251476]: 2026-01-31 08:39:49.232325794 +0000 UTC m=+0.119412828 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:39:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:49 compute-0 ceph-mon[75227]: pgmap v1130: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:51 compute-0 ceph-mon[75227]: pgmap v1131: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:54 compute-0 ceph-mon[75227]: pgmap v1132: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:39:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:56 compute-0 ceph-mon[75227]: pgmap v1133: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:58 compute-0 ceph-mon[75227]: pgmap v1134: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:39:59 compute-0 ceph-mon[75227]: pgmap v1135: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.681958) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801682034, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2085, "num_deletes": 252, "total_data_size": 3636716, "memory_usage": 3691328, "flush_reason": "Manual Compaction"}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801708577, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3535660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21099, "largest_seqno": 23183, "table_properties": {"data_size": 3526101, "index_size": 6117, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19014, "raw_average_key_size": 20, "raw_value_size": 3506999, "raw_average_value_size": 3703, "num_data_blocks": 276, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848579, "oldest_key_time": 1769848579, "file_creation_time": 1769848801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 26697 microseconds, and 7909 cpu microseconds.
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.708660) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3535660 bytes OK
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.708689) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.711384) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.711404) EVENT_LOG_v1 {"time_micros": 1769848801711399, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.711435) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3627953, prev total WAL file size 3627953, number of live WAL files 2.
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.712558) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3452KB)], [50(7835KB)]
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801712632, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11559270, "oldest_snapshot_seqno": -1}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4839 keys, 9759553 bytes, temperature: kUnknown
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801769863, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9759553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9724025, "index_size": 22298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 118613, "raw_average_key_size": 24, "raw_value_size": 9633398, "raw_average_value_size": 1990, "num_data_blocks": 937, "num_entries": 4839, "num_filter_entries": 4839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848801, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.770138) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9759553 bytes
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.772089) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.7 rd, 170.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.0) write-amplify(2.8) OK, records in: 5359, records dropped: 520 output_compression: NoCompression
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.772109) EVENT_LOG_v1 {"time_micros": 1769848801772099, "job": 26, "event": "compaction_finished", "compaction_time_micros": 57321, "compaction_time_cpu_micros": 16734, "output_level": 6, "num_output_files": 1, "total_output_size": 9759553, "num_input_records": 5359, "num_output_records": 4839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801772722, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848801773973, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.712445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:40:01.774089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:40:02 compute-0 ceph-mon[75227]: pgmap v1136: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:04 compute-0 ceph-mon[75227]: pgmap v1137: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:05 compute-0 ceph-mon[75227]: pgmap v1138: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:08 compute-0 ceph-mon[75227]: pgmap v1139: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:10 compute-0 ceph-mon[75227]: pgmap v1140: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:11 compute-0 ceph-mon[75227]: pgmap v1141: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:14 compute-0 ceph-mon[75227]: pgmap v1142: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:16 compute-0 ceph-mon[75227]: pgmap v1143: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:40:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:40:17.899 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:40:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:40:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912938853' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:40:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:40:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912938853' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:40:18 compute-0 nova_compute[238824]: 2026-01-31 08:40:18.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:18 compute-0 ceph-mon[75227]: pgmap v1144: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2912938853' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:40:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2912938853' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:40:19 compute-0 nova_compute[238824]: 2026-01-31 08:40:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:19 compute-0 nova_compute[238824]: 2026-01-31 08:40:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:20 compute-0 podman[251522]: 2026-01-31 08:40:20.201892241 +0000 UTC m=+0.049373021 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:40:20 compute-0 podman[251521]: 2026-01-31 08:40:20.210135745 +0000 UTC m=+0.065272732 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:40:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:20 compute-0 ceph-mon[75227]: pgmap v1145: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:21 compute-0 nova_compute[238824]: 2026-01-31 08:40:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:21 compute-0 nova_compute[238824]: 2026-01-31 08:40:21.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:40:21 compute-0 nova_compute[238824]: 2026-01-31 08:40:21.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:40:21 compute-0 nova_compute[238824]: 2026-01-31 08:40:21.361 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:40:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:21 compute-0 ceph-mon[75227]: pgmap v1146: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:22 compute-0 nova_compute[238824]: 2026-01-31 08:40:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:22 compute-0 nova_compute[238824]: 2026-01-31 08:40:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:40:23 compute-0 nova_compute[238824]: 2026-01-31 08:40:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.371 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.371 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:24 compute-0 ceph-mon[75227]: pgmap v1147: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:24 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:40:24 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612032007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:24 compute-0 nova_compute[238824]: 2026-01-31 08:40:24.957 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.076 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.077 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5135MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.298 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.298 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.366 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.443 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.444 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:40:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.458 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.482 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:40:25 compute-0 nova_compute[238824]: 2026-01-31 08:40:25.503 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:40:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:25 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2612032007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:40:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3045148885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:26 compute-0 nova_compute[238824]: 2026-01-31 08:40:26.128 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:40:26 compute-0 nova_compute[238824]: 2026-01-31 08:40:26.133 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:40:26 compute-0 nova_compute[238824]: 2026-01-31 08:40:26.150 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:40:26 compute-0 nova_compute[238824]: 2026-01-31 08:40:26.152 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:40:26 compute-0 nova_compute[238824]: 2026-01-31 08:40:26.152 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:40:26 compute-0 ceph-mon[75227]: pgmap v1148: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:26 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3045148885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:40:27 compute-0 nova_compute[238824]: 2026-01-31 08:40:27.147 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:27 compute-0 nova_compute[238824]: 2026-01-31 08:40:27.148 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:40:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:27 compute-0 ceph-mon[75227]: pgmap v1149: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:28 compute-0 sudo[251605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:28 compute-0 sudo[251605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:28 compute-0 sudo[251605]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:28 compute-0 sudo[251630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:40:28 compute-0 sudo[251630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:28 compute-0 sudo[251630]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:40:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:28 compute-0 sudo[251685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:28 compute-0 sudo[251685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:28 compute-0 sudo[251685]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:28 compute-0 sudo[251710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:40:28 compute-0 sudo[251710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:40:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:40:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.059830176 +0000 UTC m=+0.051314817 container create 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:29 compute-0 systemd[1]: Started libpod-conmon-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope.
Jan 31 08:40:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.031036969 +0000 UTC m=+0.022521660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.17845562 +0000 UTC m=+0.169940311 container init 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.183641267 +0000 UTC m=+0.175125908 container start 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:29 compute-0 xenodochial_dubinsky[251764]: 167 167
Jan 31 08:40:29 compute-0 systemd[1]: libpod-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope: Deactivated successfully.
Jan 31 08:40:29 compute-0 conmon[251764]: conmon 20f7bf795b360e0324db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope/container/memory.events
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.198689324 +0000 UTC m=+0.190174025 container attach 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.199551359 +0000 UTC m=+0.191036000 container died 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-89a40971cf598a282cbbffbafc88874a40fa62431fe6f85233095e42a966ae75-merged.mount: Deactivated successfully.
Jan 31 08:40:29 compute-0 podman[251748]: 2026-01-31 08:40:29.291158837 +0000 UTC m=+0.282643478 container remove 20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_dubinsky, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:29 compute-0 systemd[1]: libpod-conmon-20f7bf795b360e0324db8523f4fc87c5d853f2108c1bdaaf872f5b2ba8af0a7e.scope: Deactivated successfully.
Jan 31 08:40:29 compute-0 podman[251790]: 2026-01-31 08:40:29.477416489 +0000 UTC m=+0.047081306 container create 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:40:29 compute-0 systemd[1]: Started libpod-conmon-19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46.scope.
Jan 31 08:40:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:29 compute-0 podman[251790]: 2026-01-31 08:40:29.455099896 +0000 UTC m=+0.024764783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:29 compute-0 podman[251790]: 2026-01-31 08:40:29.603728902 +0000 UTC m=+0.173393719 container init 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:40:29 compute-0 podman[251790]: 2026-01-31 08:40:29.609640829 +0000 UTC m=+0.179305626 container start 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:40:29 compute-0 podman[251790]: 2026-01-31 08:40:29.619739476 +0000 UTC m=+0.189404293 container attach 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:40:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:29 compute-0 ceph-mon[75227]: pgmap v1150: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:30 compute-0 hardcore_euler[251807]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:40:30 compute-0 hardcore_euler[251807]: --> All data devices are unavailable
Jan 31 08:40:30 compute-0 systemd[1]: libpod-19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46.scope: Deactivated successfully.
Jan 31 08:40:30 compute-0 podman[251790]: 2026-01-31 08:40:30.073041252 +0000 UTC m=+0.642706059 container died 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9100e6d6dc949eb26617270dc75b000f37e3b44bed801c69b3b38d7f46806c3-merged.mount: Deactivated successfully.
Jan 31 08:40:30 compute-0 podman[251790]: 2026-01-31 08:40:30.153371141 +0000 UTC m=+0.723035948 container remove 19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_euler, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:40:30 compute-0 systemd[1]: libpod-conmon-19b80789a9d3dca94fa873ac8e3562a2696acf48a273bbde3bbe71488dfa2c46.scope: Deactivated successfully.
Jan 31 08:40:30 compute-0 sudo[251710]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:30 compute-0 sudo[251839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:30 compute-0 sudo[251839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:30 compute-0 sudo[251839]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:30 compute-0 sudo[251864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:40:30 compute-0 sudo[251864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:30 compute-0 podman[251901]: 2026-01-31 08:40:30.560777445 +0000 UTC m=+0.016765706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:30 compute-0 podman[251901]: 2026-01-31 08:40:30.687116969 +0000 UTC m=+0.143105240 container create 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:40:30 compute-0 systemd[1]: Started libpod-conmon-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope.
Jan 31 08:40:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:30 compute-0 podman[251901]: 2026-01-31 08:40:30.952356771 +0000 UTC m=+0.408345042 container init 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:40:30 compute-0 podman[251901]: 2026-01-31 08:40:30.958128425 +0000 UTC m=+0.414116676 container start 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:30 compute-0 systemd[1]: libpod-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope: Deactivated successfully.
Jan 31 08:40:30 compute-0 sleepy_proskuriakova[251917]: 167 167
Jan 31 08:40:30 compute-0 conmon[251917]: conmon 358a31c4272ebe888bf7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope/container/memory.events
Jan 31 08:40:30 compute-0 podman[251901]: 2026-01-31 08:40:30.96501359 +0000 UTC m=+0.421001851 container attach 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:40:30 compute-0 podman[251901]: 2026-01-31 08:40:30.96533915 +0000 UTC m=+0.421327381 container died 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7aad97b21a2ee24ea5123aed401d5cdffa939e784df76518808d53e19ac77d2-merged.mount: Deactivated successfully.
Jan 31 08:40:31 compute-0 podman[251901]: 2026-01-31 08:40:31.026463373 +0000 UTC m=+0.482451654 container remove 358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_proskuriakova, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:31 compute-0 systemd[1]: libpod-conmon-358a31c4272ebe888bf761e4806ea5e0ba3717c0eef78d67c1c7e8e9cf72afad.scope: Deactivated successfully.
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.156337656 +0000 UTC m=+0.039405249 container create 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:40:31 compute-0 systemd[1]: Started libpod-conmon-968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad.scope.
Jan 31 08:40:31 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.136950946 +0000 UTC m=+0.020018519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.245410982 +0000 UTC m=+0.128478535 container init 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.253382088 +0000 UTC m=+0.136449631 container start 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.258086921 +0000 UTC m=+0.141154494 container attach 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]: {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:     "0": [
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:         {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "devices": [
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "/dev/loop3"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             ],
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_name": "ceph_lv0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_size": "21470642176",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "name": "ceph_lv0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "tags": {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.crush_device_class": "",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.encrypted": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.objectstore": "bluestore",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osd_id": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.type": "block",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.vdo": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.with_tpm": "0"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             },
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "type": "block",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "vg_name": "ceph_vg0"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:         }
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:     ],
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:     "1": [
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:         {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "devices": [
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "/dev/loop4"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             ],
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_name": "ceph_lv1",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_size": "21470642176",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "name": "ceph_lv1",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "tags": {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.crush_device_class": "",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.encrypted": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.objectstore": "bluestore",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osd_id": "1",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.type": "block",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.vdo": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.with_tpm": "0"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             },
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "type": "block",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "vg_name": "ceph_vg1"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:         }
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:     ],
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:     "2": [
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:         {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "devices": [
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "/dev/loop5"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             ],
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_name": "ceph_lv2",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_size": "21470642176",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "name": "ceph_lv2",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "tags": {
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.cluster_name": "ceph",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.crush_device_class": "",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.encrypted": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.objectstore": "bluestore",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osd_id": "2",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.type": "block",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.vdo": "0",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:                 "ceph.with_tpm": "0"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             },
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "type": "block",
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:             "vg_name": "ceph_vg2"
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:         }
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]:     ]
Jan 31 08:40:31 compute-0 reverent_bhaskara[251957]: }
Jan 31 08:40:31 compute-0 systemd[1]: libpod-968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad.scope: Deactivated successfully.
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.504729617 +0000 UTC m=+0.387797210 container died 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:40:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1efa18785b56b70f57d4dd7fa453eca193800a267fec9df3214ed521f195bae-merged.mount: Deactivated successfully.
Jan 31 08:40:31 compute-0 podman[251941]: 2026-01-31 08:40:31.726709862 +0000 UTC m=+0.609777425 container remove 968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:40:31 compute-0 systemd[1]: libpod-conmon-968fc56cb17ca3fe5279a30f6969bf1f05b128b43b148d9cf6b70c9abdffdcad.scope: Deactivated successfully.
Jan 31 08:40:31 compute-0 sudo[251864]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:40:31
Jan 31 08:40:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:40:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:40:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'backups', '.mgr', 'images']
Jan 31 08:40:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:40:31 compute-0 sudo[251979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:40:31 compute-0 sudo[251979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:31 compute-0 sudo[251979]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:31 compute-0 sudo[252004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:40:31 compute-0 sudo[252004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.175316296 +0000 UTC m=+0.042832836 container create afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:40:32 compute-0 systemd[1]: Started libpod-conmon-afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de.scope.
Jan 31 08:40:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.24529592 +0000 UTC m=+0.112812490 container init afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.152134468 +0000 UTC m=+0.019651028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.251363182 +0000 UTC m=+0.118879732 container start afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:40:32 compute-0 priceless_galois[252057]: 167 167
Jan 31 08:40:32 compute-0 systemd[1]: libpod-afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de.scope: Deactivated successfully.
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.258444613 +0000 UTC m=+0.125961193 container attach afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.258811154 +0000 UTC m=+0.126327704 container died afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ce0a02109c09f4b3fdc09f3ca3eb079794c3bfcb46e4869afd123768a452d4c-merged.mount: Deactivated successfully.
Jan 31 08:40:32 compute-0 podman[252041]: 2026-01-31 08:40:32.324522857 +0000 UTC m=+0.192039407 container remove afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:40:32 compute-0 systemd[1]: libpod-conmon-afb4164516e0c8bccbcf546aa1178d0afe410e22268fd94c38b72db6fcbe09de.scope: Deactivated successfully.
Jan 31 08:40:32 compute-0 podman[252080]: 2026-01-31 08:40:32.415867138 +0000 UTC m=+0.017297951 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:40:32 compute-0 podman[252080]: 2026-01-31 08:40:32.600625238 +0000 UTC m=+0.202056031 container create a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:40:32 compute-0 ceph-mon[75227]: pgmap v1151: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:32 compute-0 systemd[1]: Started libpod-conmon-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope.
Jan 31 08:40:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:40:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:32 compute-0 podman[252080]: 2026-01-31 08:40:32.823643913 +0000 UTC m=+0.425074716 container init a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:40:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:40:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:40:32 compute-0 podman[252080]: 2026-01-31 08:40:32.83091956 +0000 UTC m=+0.432350353 container start a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:40:32 compute-0 podman[252080]: 2026-01-31 08:40:32.838663889 +0000 UTC m=+0.440094712 container attach a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:40:33 compute-0 lvm[252175]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:40:33 compute-0 lvm[252175]: VG ceph_vg0 finished
Jan 31 08:40:33 compute-0 lvm[252174]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:40:33 compute-0 lvm[252174]: VG ceph_vg1 finished
Jan 31 08:40:33 compute-0 lvm[252177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:40:33 compute-0 lvm[252177]: VG ceph_vg2 finished
Jan 31 08:40:33 compute-0 angry_gates[252096]: {}
Jan 31 08:40:33 compute-0 systemd[1]: libpod-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope: Deactivated successfully.
Jan 31 08:40:33 compute-0 systemd[1]: libpod-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope: Consumed 1.027s CPU time.
Jan 31 08:40:33 compute-0 podman[252080]: 2026-01-31 08:40:33.566043099 +0000 UTC m=+1.167473902 container died a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:40:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-afbd38fec9d7e84370af3253dcdfdada92ab4f38fb2f256fb2ec78dae05e2c47-merged.mount: Deactivated successfully.
Jan 31 08:40:33 compute-0 podman[252080]: 2026-01-31 08:40:33.601789483 +0000 UTC m=+1.203220256 container remove a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:40:33 compute-0 systemd[1]: libpod-conmon-a205d6c4d77fb01ed69acb7929de3a0f4ecaa86015d75162edb365d35c10297f.scope: Deactivated successfully.
Jan 31 08:40:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:33 compute-0 sudo[252004]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:40:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:40:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:40:33 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:40:33 compute-0 sudo[252192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:40:33 compute-0 sudo[252192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:40:33 compute-0 sudo[252192]: pam_unix(sudo:session): session closed for user root
Jan 31 08:40:34 compute-0 ceph-mon[75227]: pgmap v1152: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:40:34 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:40:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:35 compute-0 ceph-mon[75227]: pgmap v1153: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:38 compute-0 ceph-mon[75227]: pgmap v1154: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:40 compute-0 ceph-mon[75227]: pgmap v1155: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:42 compute-0 ceph-mon[75227]: pgmap v1156: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:40:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:44 compute-0 ceph-mon[75227]: pgmap v1157: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:46 compute-0 ceph-mon[75227]: pgmap v1158: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:48 compute-0 ceph-mon[75227]: pgmap v1159: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:49 compute-0 ceph-mon[75227]: pgmap v1160: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:51 compute-0 podman[252217]: 2026-01-31 08:40:51.19198175 +0000 UTC m=+0.080248517 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 31 08:40:51 compute-0 podman[252218]: 2026-01-31 08:40:51.191763113 +0000 UTC m=+0.080323279 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:40:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:52 compute-0 ceph-mon[75227]: pgmap v1161: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:54 compute-0 ceph-mon[75227]: pgmap v1162: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:40:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:56 compute-0 ceph-mon[75227]: pgmap v1163: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:58 compute-0 ceph-mon[75227]: pgmap v1164: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:40:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:00 compute-0 ceph-mon[75227]: pgmap v1165: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:01 compute-0 anacron[43542]: Job `cron.daily' started
Jan 31 08:41:01 compute-0 anacron[43542]: Job `cron.daily' terminated
Jan 31 08:41:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:02 compute-0 ceph-mon[75227]: pgmap v1166: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:03 compute-0 ceph-mon[75227]: pgmap v1167: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:05 compute-0 ceph-mon[75227]: pgmap v1168: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:08 compute-0 ceph-mon[75227]: pgmap v1169: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:10 compute-0 ceph-mon[75227]: pgmap v1170: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:11 compute-0 ceph-mon[75227]: pgmap v1171: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:14 compute-0 ceph-mon[75227]: pgmap v1172: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:16 compute-0 ceph-mon[75227]: pgmap v1173: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:41:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:41:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:41:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:41:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177264080' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:41:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:41:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1177264080' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:41:18 compute-0 ceph-mon[75227]: pgmap v1174: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:18 compute-0 nova_compute[238824]: 2026-01-31 08:41:18.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1177264080' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:41:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1177264080' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:41:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:20 compute-0 nova_compute[238824]: 2026-01-31 08:41:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:20 compute-0 ceph-mon[75227]: pgmap v1175: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:21 compute-0 nova_compute[238824]: 2026-01-31 08:41:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:21 compute-0 ceph-mon[75227]: pgmap v1176: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:22 compute-0 podman[252259]: 2026-01-31 08:41:22.147166921 +0000 UTC m=+0.039065059 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 08:41:22 compute-0 podman[252258]: 2026-01-31 08:41:22.176874183 +0000 UTC m=+0.070346096 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:41:22 compute-0 nova_compute[238824]: 2026-01-31 08:41:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:22 compute-0 nova_compute[238824]: 2026-01-31 08:41:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:41:23 compute-0 nova_compute[238824]: 2026-01-31 08:41:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:23 compute-0 nova_compute[238824]: 2026-01-31 08:41:23.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:41:23 compute-0 nova_compute[238824]: 2026-01-31 08:41:23.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:41:23 compute-0 nova_compute[238824]: 2026-01-31 08:41:23.429 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:41:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:24 compute-0 nova_compute[238824]: 2026-01-31 08:41:24.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:24 compute-0 nova_compute[238824]: 2026-01-31 08:41:24.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:24 compute-0 ceph-mon[75227]: pgmap v1177: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:25 compute-0 nova_compute[238824]: 2026-01-31 08:41:25.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:26 compute-0 ceph-mon[75227]: pgmap v1178: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:28 compute-0 nova_compute[238824]: 2026-01-31 08:41:28.440 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:28 compute-0 nova_compute[238824]: 2026-01-31 08:41:28.440 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:28 compute-0 nova_compute[238824]: 2026-01-31 08:41:28.440 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:28 compute-0 nova_compute[238824]: 2026-01-31 08:41:28.441 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:41:28 compute-0 nova_compute[238824]: 2026-01-31 08:41:28.441 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:28 compute-0 ceph-mon[75227]: pgmap v1179: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:41:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1289803331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:28 compute-0 nova_compute[238824]: 2026-01-31 08:41:28.927 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.075 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.076 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.077 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:41:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.718 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.719 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:41:29 compute-0 nova_compute[238824]: 2026-01-31 08:41:29.744 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:41:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1289803331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:29 compute-0 ceph-mon[75227]: pgmap v1180: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:41:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756240464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:30 compute-0 nova_compute[238824]: 2026-01-31 08:41:30.518 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:41:30 compute-0 nova_compute[238824]: 2026-01-31 08:41:30.523 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:41:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:30 compute-0 nova_compute[238824]: 2026-01-31 08:41:30.633 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:41:30 compute-0 nova_compute[238824]: 2026-01-31 08:41:30.635 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:41:30 compute-0 nova_compute[238824]: 2026-01-31 08:41:30.635 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:41:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1756240464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:41:31 compute-0 nova_compute[238824]: 2026-01-31 08:41:31.630 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:41:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:41:31
Jan 31 08:41:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:41:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:41:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', '.mgr', 'images', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 08:41:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:41:32 compute-0 ceph-mon[75227]: pgmap v1181: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:41:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:41:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:33 compute-0 sudo[252347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:33 compute-0 sudo[252347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:33 compute-0 sudo[252347]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:33 compute-0 sudo[252372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:41:33 compute-0 sudo[252372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:33 compute-0 ceph-mon[75227]: pgmap v1182: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:34 compute-0 sudo[252372]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:41:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:41:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:41:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:41:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:41:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:41:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:41:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:41:34 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:41:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:41:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:34 compute-0 sudo[252428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:34 compute-0 sudo[252428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:34 compute-0 sudo[252428]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:34 compute-0 sudo[252453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:41:34 compute-0 sudo[252453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:34 compute-0 podman[252490]: 2026-01-31 08:41:34.842974085 +0000 UTC m=+0.019575996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:34 compute-0 podman[252490]: 2026-01-31 08:41:34.995713427 +0000 UTC m=+0.172315318 container create d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:41:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:41:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:41:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:41:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:41:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:41:35 compute-0 systemd[1]: Started libpod-conmon-d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db.scope.
Jan 31 08:41:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:35 compute-0 podman[252490]: 2026-01-31 08:41:35.347144864 +0000 UTC m=+0.523746815 container init d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:41:35 compute-0 podman[252490]: 2026-01-31 08:41:35.355916033 +0000 UTC m=+0.532517964 container start d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:41:35 compute-0 optimistic_shtern[252507]: 167 167
Jan 31 08:41:35 compute-0 systemd[1]: libpod-d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db.scope: Deactivated successfully.
Jan 31 08:41:35 compute-0 podman[252490]: 2026-01-31 08:41:35.527823187 +0000 UTC m=+0.704425108 container attach d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:41:35 compute-0 podman[252490]: 2026-01-31 08:41:35.528189937 +0000 UTC m=+0.704791858 container died d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:41:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba3be8de05bdfc7b639e1009b40d2b182fff5c20e2480bdf7900e7d04696e18-merged.mount: Deactivated successfully.
Jan 31 08:41:36 compute-0 ceph-mon[75227]: pgmap v1183: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:36 compute-0 podman[252490]: 2026-01-31 08:41:36.595835768 +0000 UTC m=+1.772437679 container remove d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_shtern, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:41:36 compute-0 systemd[1]: libpod-conmon-d0889f3aaa04326306c1fbf6a3dd581daac285c3ea6a7850dade7fcffa61b7db.scope: Deactivated successfully.
Jan 31 08:41:36 compute-0 podman[252533]: 2026-01-31 08:41:36.720781622 +0000 UTC m=+0.032442281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:36 compute-0 podman[252533]: 2026-01-31 08:41:36.91711045 +0000 UTC m=+0.228771029 container create 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:41:37 compute-0 systemd[1]: Started libpod-conmon-878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b.scope.
Jan 31 08:41:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:37 compute-0 podman[252533]: 2026-01-31 08:41:37.173379138 +0000 UTC m=+0.485039747 container init 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:41:37 compute-0 podman[252533]: 2026-01-31 08:41:37.179994816 +0000 UTC m=+0.491655415 container start 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:41:37 compute-0 podman[252533]: 2026-01-31 08:41:37.297932211 +0000 UTC m=+0.609592850 container attach 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:41:37 compute-0 kind_carson[252550]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:41:37 compute-0 kind_carson[252550]: --> All data devices are unavailable
Jan 31 08:41:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:37 compute-0 systemd[1]: libpod-878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b.scope: Deactivated successfully.
Jan 31 08:41:37 compute-0 podman[252533]: 2026-01-31 08:41:37.680914253 +0000 UTC m=+0.992574842 container died 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Jan 31 08:41:38 compute-0 ceph-mon[75227]: pgmap v1184: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-03238b74898c859234b56896e5eb3cc1bf63b5a98b85313757ee2b9c8fe880f0-merged.mount: Deactivated successfully.
Jan 31 08:41:39 compute-0 podman[252533]: 2026-01-31 08:41:39.185022331 +0000 UTC m=+2.496682900 container remove 878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_carson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:41:39 compute-0 sudo[252453]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:39 compute-0 systemd[1]: libpod-conmon-878c818be5245b2e90829afb35eb4d932930c096303f5c376a681d787b99294b.scope: Deactivated successfully.
Jan 31 08:41:39 compute-0 sudo[252583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:39 compute-0 sudo[252583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:39 compute-0 sudo[252583]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:39 compute-0 sudo[252608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:41:39 compute-0 sudo[252608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:39 compute-0 podman[252646]: 2026-01-31 08:41:39.591977333 +0000 UTC m=+0.023204059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:39 compute-0 podman[252646]: 2026-01-31 08:41:39.824849647 +0000 UTC m=+0.256076383 container create 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:41:40 compute-0 systemd[1]: Started libpod-conmon-400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b.scope.
Jan 31 08:41:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:40 compute-0 ceph-mon[75227]: pgmap v1185: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:40 compute-0 podman[252646]: 2026-01-31 08:41:40.288532318 +0000 UTC m=+0.719759044 container init 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 08:41:40 compute-0 podman[252646]: 2026-01-31 08:41:40.294041334 +0000 UTC m=+0.725268070 container start 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:41:40 compute-0 practical_buck[252662]: 167 167
Jan 31 08:41:40 compute-0 systemd[1]: libpod-400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b.scope: Deactivated successfully.
Jan 31 08:41:40 compute-0 podman[252646]: 2026-01-31 08:41:40.393002981 +0000 UTC m=+0.824229707 container attach 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:41:40 compute-0 podman[252646]: 2026-01-31 08:41:40.39436777 +0000 UTC m=+0.825594466 container died 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:41:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-939671ef586eccdc19146d93def1b76eb26782aaa5d2fff4475d77cb0305ed04-merged.mount: Deactivated successfully.
Jan 31 08:41:41 compute-0 podman[252646]: 2026-01-31 08:41:41.152224573 +0000 UTC m=+1.583451279 container remove 400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_buck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:41:41 compute-0 systemd[1]: libpod-conmon-400dd5f37bf40ac04c57452369b167d5169478e1ed0a7ab7f96570cad55b247b.scope: Deactivated successfully.
Jan 31 08:41:41 compute-0 podman[252687]: 2026-01-31 08:41:41.274918873 +0000 UTC m=+0.027834080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:41 compute-0 podman[252687]: 2026-01-31 08:41:41.541656278 +0000 UTC m=+0.294571385 container create a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:41:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:41 compute-0 systemd[1]: Started libpod-conmon-a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44.scope.
Jan 31 08:41:41 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:41 compute-0 podman[252687]: 2026-01-31 08:41:41.934085609 +0000 UTC m=+0.687000756 container init a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:41:41 compute-0 podman[252687]: 2026-01-31 08:41:41.941094748 +0000 UTC m=+0.694009865 container start a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:41:42 compute-0 brave_torvalds[252704]: {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:     "0": [
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:         {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "devices": [
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "/dev/loop3"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             ],
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_name": "ceph_lv0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_size": "21470642176",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "name": "ceph_lv0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "tags": {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.crush_device_class": "",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.encrypted": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.objectstore": "bluestore",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osd_id": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.type": "block",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.vdo": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.with_tpm": "0"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             },
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "type": "block",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "vg_name": "ceph_vg0"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:         }
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:     ],
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:     "1": [
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:         {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "devices": [
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "/dev/loop4"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             ],
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_name": "ceph_lv1",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_size": "21470642176",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "name": "ceph_lv1",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "tags": {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.crush_device_class": "",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.encrypted": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.objectstore": "bluestore",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osd_id": "1",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.type": "block",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.vdo": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.with_tpm": "0"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             },
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "type": "block",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "vg_name": "ceph_vg1"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:         }
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:     ],
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:     "2": [
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:         {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "devices": [
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "/dev/loop5"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             ],
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_name": "ceph_lv2",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_size": "21470642176",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "name": "ceph_lv2",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "tags": {
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.cluster_name": "ceph",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.crush_device_class": "",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.encrypted": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.objectstore": "bluestore",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osd_id": "2",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.type": "block",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.vdo": "0",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:                 "ceph.with_tpm": "0"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             },
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "type": "block",
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:             "vg_name": "ceph_vg2"
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:         }
Jan 31 08:41:42 compute-0 brave_torvalds[252704]:     ]
Jan 31 08:41:42 compute-0 brave_torvalds[252704]: }
Jan 31 08:41:42 compute-0 ceph-mon[75227]: pgmap v1186: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:42 compute-0 systemd[1]: libpod-a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44.scope: Deactivated successfully.
Jan 31 08:41:42 compute-0 podman[252687]: 2026-01-31 08:41:42.224366402 +0000 UTC m=+0.977281519 container attach a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:41:42 compute-0 podman[252687]: 2026-01-31 08:41:42.22536018 +0000 UTC m=+0.978275337 container died a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:41:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddd3a0132f1d07d7c99e4d31750e138ac660de3e56f2a0cf0965517de10e1a07-merged.mount: Deactivated successfully.
Jan 31 08:41:43 compute-0 podman[252687]: 2026-01-31 08:41:43.247704714 +0000 UTC m=+2.000619831 container remove a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:41:43 compute-0 sudo[252608]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:43 compute-0 systemd[1]: libpod-conmon-a0fade8a521276c1591e2d4bf3ab634814b692e9b56fb3fb1d54d90c30a7de44.scope: Deactivated successfully.
Jan 31 08:41:43 compute-0 sudo[252726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:41:43 compute-0 sudo[252726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:43 compute-0 sudo[252726]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:43 compute-0 sudo[252751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:41:43 compute-0 sudo[252751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:41:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:43 compute-0 podman[252788]: 2026-01-31 08:41:43.657909829 +0000 UTC m=+0.022398757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:44 compute-0 podman[252788]: 2026-01-31 08:41:44.004365684 +0000 UTC m=+0.368854512 container create fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:41:44 compute-0 ceph-mon[75227]: pgmap v1187: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:44 compute-0 systemd[1]: Started libpod-conmon-fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f.scope.
Jan 31 08:41:44 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:44 compute-0 podman[252788]: 2026-01-31 08:41:44.338200092 +0000 UTC m=+0.702688940 container init fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 31 08:41:44 compute-0 podman[252788]: 2026-01-31 08:41:44.343106672 +0000 UTC m=+0.707595500 container start fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:41:44 compute-0 interesting_herschel[252804]: 167 167
Jan 31 08:41:44 compute-0 systemd[1]: libpod-fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f.scope: Deactivated successfully.
Jan 31 08:41:44 compute-0 podman[252788]: 2026-01-31 08:41:44.491027817 +0000 UTC m=+0.855516675 container attach fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:41:44 compute-0 podman[252788]: 2026-01-31 08:41:44.49149435 +0000 UTC m=+0.855983178 container died fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 31 08:41:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7122c1b68c5b431a6f89a44e9a06857c99fd4e223f9a702ea352031000742af-merged.mount: Deactivated successfully.
Jan 31 08:41:45 compute-0 podman[252788]: 2026-01-31 08:41:45.106827312 +0000 UTC m=+1.471316170 container remove fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:41:45 compute-0 systemd[1]: libpod-conmon-fd3c35154ff0e5a9a06714318fcd08ebf04595f6dd41610250bd5770eb5fc73f.scope: Deactivated successfully.
Jan 31 08:41:45 compute-0 podman[252829]: 2026-01-31 08:41:45.231244841 +0000 UTC m=+0.018242448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:41:45 compute-0 podman[252829]: 2026-01-31 08:41:45.412193983 +0000 UTC m=+0.199191600 container create ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:41:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:45 compute-0 systemd[1]: Started libpod-conmon-ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534.scope.
Jan 31 08:41:45 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:41:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:45 compute-0 podman[252829]: 2026-01-31 08:41:45.752572996 +0000 UTC m=+0.539570613 container init ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:41:45 compute-0 podman[252829]: 2026-01-31 08:41:45.758793093 +0000 UTC m=+0.545790720 container start ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:41:45 compute-0 podman[252829]: 2026-01-31 08:41:45.857834272 +0000 UTC m=+0.644831859 container attach ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:41:45 compute-0 ceph-mon[75227]: pgmap v1188: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:46 compute-0 lvm[252927]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:41:46 compute-0 lvm[252924]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:41:46 compute-0 lvm[252924]: VG ceph_vg0 finished
Jan 31 08:41:46 compute-0 lvm[252927]: VG ceph_vg1 finished
Jan 31 08:41:46 compute-0 lvm[252929]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:41:46 compute-0 lvm[252929]: VG ceph_vg2 finished
Jan 31 08:41:46 compute-0 sharp_zhukovsky[252846]: {}
Jan 31 08:41:46 compute-0 systemd[1]: libpod-ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534.scope: Deactivated successfully.
Jan 31 08:41:46 compute-0 podman[252829]: 2026-01-31 08:41:46.459177716 +0000 UTC m=+1.246175333 container died ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 08:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c87016442494f86c6d6afe06796bd62ebffcd09305a1e9ea378297298d0e0c6d-merged.mount: Deactivated successfully.
Jan 31 08:41:46 compute-0 podman[252829]: 2026-01-31 08:41:46.941381713 +0000 UTC m=+1.728379300 container remove ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:41:46 compute-0 sudo[252751]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:41:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:41:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:41:47 compute-0 systemd[1]: libpod-conmon-ed5b4eba6c3917abb090e452d328ffce56a9c82f7579e78f08fe6ca722150534.scope: Deactivated successfully.
Jan 31 08:41:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:41:47 compute-0 sudo[252944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:41:47 compute-0 sudo[252944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:41:47 compute-0 sudo[252944]: pam_unix(sudo:session): session closed for user root
Jan 31 08:41:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:41:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:41:48 compute-0 ceph-mon[75227]: pgmap v1189: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:50 compute-0 ceph-mon[75227]: pgmap v1190: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:51 compute-0 ceph-mon[75227]: pgmap v1191: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:53 compute-0 podman[252970]: 2026-01-31 08:41:53.16550355 +0000 UTC m=+0.054599859 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 08:41:53 compute-0 podman[252969]: 2026-01-31 08:41:53.184011935 +0000 UTC m=+0.077064486 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 08:41:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:53 compute-0 ceph-mon[75227]: pgmap v1192: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:41:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:56 compute-0 ceph-mon[75227]: pgmap v1193: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:58 compute-0 ceph-mon[75227]: pgmap v1194: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:41:59 compute-0 ceph-mon[75227]: pgmap v1195: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:02 compute-0 ceph-mon[75227]: pgmap v1196: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:03 compute-0 ceph-mon[75227]: pgmap v1197: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:06 compute-0 ceph-mon[75227]: pgmap v1198: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:07 compute-0 ceph-mon[75227]: pgmap v1199: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:09 compute-0 ceph-mon[75227]: pgmap v1200: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:12 compute-0 ceph-mon[75227]: pgmap v1201: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:14 compute-0 ceph-mon[75227]: pgmap v1202: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:16 compute-0 ceph-mon[75227]: pgmap v1203: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:42:17.900 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:42:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:42:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:17 compute-0 ceph-mon[75227]: pgmap v1204: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:42:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3620524586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:42:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:42:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3620524586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:42:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3620524586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:42:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3620524586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:42:19 compute-0 nova_compute[238824]: 2026-01-31 08:42:19.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:20 compute-0 ceph-mon[75227]: pgmap v1205: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:20.855369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848940855413, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1598, "num_deletes": 509, "total_data_size": 2087478, "memory_usage": 2130064, "flush_reason": "Manual Compaction"}
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848940940353, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1882506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23184, "largest_seqno": 24781, "table_properties": {"data_size": 1875876, "index_size": 3256, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17357, "raw_average_key_size": 19, "raw_value_size": 1860360, "raw_average_value_size": 2044, "num_data_blocks": 147, "num_entries": 910, "num_filter_entries": 910, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848802, "oldest_key_time": 1769848802, "file_creation_time": 1769848940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 85023 microseconds, and 3633 cpu microseconds.
Jan 31 08:42:20 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:20.940395) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1882506 bytes OK
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:20.940412) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.036615) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.036671) EVENT_LOG_v1 {"time_micros": 1769848941036659, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.036701) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 2079455, prev total WAL file size 2079455, number of live WAL files 2.
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.037631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1838KB)], [53(9530KB)]
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941037712, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11642059, "oldest_snapshot_seqno": -1}
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4731 keys, 8350433 bytes, temperature: kUnknown
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941149320, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 8350433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8317767, "index_size": 19756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 117939, "raw_average_key_size": 24, "raw_value_size": 8231094, "raw_average_value_size": 1739, "num_data_blocks": 822, "num_entries": 4731, "num_filter_entries": 4731, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769848941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.149611) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 8350433 bytes
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.179911) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.2 rd, 74.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.3 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(10.6) write-amplify(4.4) OK, records in: 5749, records dropped: 1018 output_compression: NoCompression
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.179948) EVENT_LOG_v1 {"time_micros": 1769848941179934, "job": 28, "event": "compaction_finished", "compaction_time_micros": 111697, "compaction_time_cpu_micros": 27282, "output_level": 6, "num_output_files": 1, "total_output_size": 8350433, "num_input_records": 5749, "num_output_records": 4731, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941180409, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848941181481, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.037532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:21 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:42:21.181567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:42:21 compute-0 nova_compute[238824]: 2026-01-31 08:42:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:21 compute-0 ceph-mon[75227]: pgmap v1206: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:22 compute-0 nova_compute[238824]: 2026-01-31 08:42:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:22 compute-0 nova_compute[238824]: 2026-01-31 08:42:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:22 compute-0 nova_compute[238824]: 2026-01-31 08:42:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:42:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:24 compute-0 ceph-mon[75227]: pgmap v1207: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:24 compute-0 podman[253016]: 2026-01-31 08:42:24.16810593 +0000 UTC m=+0.060021149 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 08:42:24 compute-0 podman[253015]: 2026-01-31 08:42:24.190237154 +0000 UTC m=+0.085286982 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_managed=true, container_name=ovn_controller)
Jan 31 08:42:25 compute-0 nova_compute[238824]: 2026-01-31 08:42:25.335 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:25 compute-0 nova_compute[238824]: 2026-01-31 08:42:25.351 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:25 compute-0 nova_compute[238824]: 2026-01-31 08:42:25.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:42:25 compute-0 nova_compute[238824]: 2026-01-31 08:42:25.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:42:25 compute-0 nova_compute[238824]: 2026-01-31 08:42:25.365 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:42:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:26 compute-0 ceph-mon[75227]: pgmap v1208: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.367 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.368 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:42:26 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1461642590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:26 compute-0 nova_compute[238824]: 2026-01-31 08:42:26.912 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.102 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.104 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5130MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.104 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.104 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.178 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.178 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.196 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:42:27 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1461642590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:42:27 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181459331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.713 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.719 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.746 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.748 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:42:27 compute-0 nova_compute[238824]: 2026-01-31 08:42:27.748 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:42:28 compute-0 ceph-mon[75227]: pgmap v1209: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:28 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/181459331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:42:28 compute-0 nova_compute[238824]: 2026-01-31 08:42:28.744 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:42:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:30 compute-0 ceph-mon[75227]: pgmap v1210: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:42:31
Jan 31 08:42:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:42:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:42:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Jan 31 08:42:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:42:32 compute-0 ceph-mon[75227]: pgmap v1211: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:42:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:42:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:34 compute-0 ceph-mon[75227]: pgmap v1212: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:36 compute-0 ceph-mon[75227]: pgmap v1213: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:38 compute-0 ceph-mon[75227]: pgmap v1214: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:40 compute-0 ceph-mon[75227]: pgmap v1215: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:42 compute-0 ceph-mon[75227]: pgmap v1216: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:42:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:43 compute-0 ceph-mon[75227]: pgmap v1217: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:45 compute-0 ceph-mon[75227]: pgmap v1218: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:47 compute-0 sudo[253102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:47 compute-0 sudo[253102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:47 compute-0 sudo[253102]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:47 compute-0 sudo[253127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:42:47 compute-0 sudo[253127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:47 compute-0 sudo[253127]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:42:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:42:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:42:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:42:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:42:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:42:47 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:42:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:47 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:42:47 compute-0 sudo[253183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:47 compute-0 sudo[253183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:47 compute-0 sudo[253183]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:47 compute-0 sudo[253208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:42:47 compute-0 sudo[253208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:48 compute-0 podman[253244]: 2026-01-31 08:42:48.242927464 +0000 UTC m=+0.114781126 container create 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:42:48 compute-0 podman[253244]: 2026-01-31 08:42:48.148597924 +0000 UTC m=+0.020451606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:48 compute-0 systemd[1]: Started libpod-conmon-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope.
Jan 31 08:42:48 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:48 compute-0 podman[253244]: 2026-01-31 08:42:48.403952573 +0000 UTC m=+0.275806255 container init 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:42:48 compute-0 podman[253244]: 2026-01-31 08:42:48.409580314 +0000 UTC m=+0.281433966 container start 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:42:48 compute-0 systemd[1]: libpod-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope: Deactivated successfully.
Jan 31 08:42:48 compute-0 stupefied_mendeleev[253260]: 167 167
Jan 31 08:42:48 compute-0 conmon[253260]: conmon 7f2896a1c39654d86064 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope/container/memory.events
Jan 31 08:42:48 compute-0 podman[253244]: 2026-01-31 08:42:48.468338546 +0000 UTC m=+0.340192198 container attach 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:42:48 compute-0 podman[253244]: 2026-01-31 08:42:48.468699336 +0000 UTC m=+0.340552988 container died 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b62712f56f4eee18d905342f7bea3c7b5452f7b89bc92a3e0f84c9154ea5efa8-merged.mount: Deactivated successfully.
Jan 31 08:42:48 compute-0 ceph-mon[75227]: pgmap v1219: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:42:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:42:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:42:48 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:42:49 compute-0 podman[253244]: 2026-01-31 08:42:49.036572299 +0000 UTC m=+0.908425951 container remove 7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:42:49 compute-0 systemd[1]: libpod-conmon-7f2896a1c39654d860649e88c46d072d0b606979c80b04eef4d23815460649ac.scope: Deactivated successfully.
Jan 31 08:42:49 compute-0 podman[253285]: 2026-01-31 08:42:49.254517667 +0000 UTC m=+0.110596717 container create e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:42:49 compute-0 podman[253285]: 2026-01-31 08:42:49.166688883 +0000 UTC m=+0.022767963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:49 compute-0 systemd[1]: Started libpod-conmon-e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21.scope.
Jan 31 08:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:49 compute-0 podman[253285]: 2026-01-31 08:42:49.417399219 +0000 UTC m=+0.273478369 container init e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 31 08:42:49 compute-0 podman[253285]: 2026-01-31 08:42:49.423769971 +0000 UTC m=+0.279849021 container start e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:49 compute-0 podman[253285]: 2026-01-31 08:42:49.512005396 +0000 UTC m=+0.368084476 container attach e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:42:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:49 compute-0 wonderful_darwin[253302]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:42:49 compute-0 wonderful_darwin[253302]: --> All data devices are unavailable
Jan 31 08:42:49 compute-0 systemd[1]: libpod-e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21.scope: Deactivated successfully.
Jan 31 08:42:49 compute-0 podman[253285]: 2026-01-31 08:42:49.831819299 +0000 UTC m=+0.687898359 container died e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 31 08:42:50 compute-0 ceph-mon[75227]: pgmap v1220: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8b9d6b49eafbba9226301689b5f061045f41b72dc15722087218c6a7426475-merged.mount: Deactivated successfully.
Jan 31 08:42:50 compute-0 podman[253285]: 2026-01-31 08:42:50.560946978 +0000 UTC m=+1.417026028 container remove e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_darwin, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:42:50 compute-0 systemd[1]: libpod-conmon-e3d3bd62cebe5e777383eaa2fad6c79065743660781d530908abbdbf4f232d21.scope: Deactivated successfully.
Jan 31 08:42:50 compute-0 sudo[253208]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:50 compute-0 sudo[253337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:50 compute-0 sudo[253337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:50 compute-0 sudo[253337]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:50 compute-0 sudo[253362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:42:50 compute-0 sudo[253362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:51.020540601 +0000 UTC m=+0.105312736 container create 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:50.935577029 +0000 UTC m=+0.020349204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:51 compute-0 systemd[1]: Started libpod-conmon-5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43.scope.
Jan 31 08:42:51 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:51.247219449 +0000 UTC m=+0.331991594 container init 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:51.252882651 +0000 UTC m=+0.337654776 container start 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:42:51 compute-0 tender_elgamal[253417]: 167 167
Jan 31 08:42:51 compute-0 systemd[1]: libpod-5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43.scope: Deactivated successfully.
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:51.335798024 +0000 UTC m=+0.420570179 container attach 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:51.336207505 +0000 UTC m=+0.420979640 container died 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-41e98167b715a5dd6d67e4b3b7ac9ddd2cd009919764a52fa1a10507adb81af5-merged.mount: Deactivated successfully.
Jan 31 08:42:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:51 compute-0 podman[253400]: 2026-01-31 08:42:51.905466458 +0000 UTC m=+0.990238583 container remove 5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:42:51 compute-0 systemd[1]: libpod-conmon-5fab9779ef99cb1406c1a0d1e7822d1ae528670c02bca3a7234372d312bc3e43.scope: Deactivated successfully.
Jan 31 08:42:51 compute-0 ceph-mon[75227]: pgmap v1221: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:52 compute-0 podman[253444]: 2026-01-31 08:42:52.062313567 +0000 UTC m=+0.062774397 container create 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:42:52 compute-0 podman[253444]: 2026-01-31 08:42:52.019844862 +0000 UTC m=+0.020305712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:52 compute-0 systemd[1]: Started libpod-conmon-54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e.scope.
Jan 31 08:42:52 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:52 compute-0 podman[253444]: 2026-01-31 08:42:52.235455413 +0000 UTC m=+0.235916263 container init 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:42:52 compute-0 podman[253444]: 2026-01-31 08:42:52.242200696 +0000 UTC m=+0.242661526 container start 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:42:52 compute-0 podman[253444]: 2026-01-31 08:42:52.314077163 +0000 UTC m=+0.314538013 container attach 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]: {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:     "0": [
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:         {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "devices": [
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "/dev/loop3"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             ],
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_name": "ceph_lv0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_size": "21470642176",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "name": "ceph_lv0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "tags": {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cluster_name": "ceph",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.crush_device_class": "",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.encrypted": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.objectstore": "bluestore",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osd_id": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.type": "block",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.vdo": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.with_tpm": "0"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             },
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "type": "block",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "vg_name": "ceph_vg0"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:         }
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:     ],
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:     "1": [
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:         {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "devices": [
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "/dev/loop4"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             ],
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_name": "ceph_lv1",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_size": "21470642176",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "name": "ceph_lv1",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "tags": {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cluster_name": "ceph",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.crush_device_class": "",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.encrypted": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.objectstore": "bluestore",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osd_id": "1",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.type": "block",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.vdo": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.with_tpm": "0"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             },
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "type": "block",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "vg_name": "ceph_vg1"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:         }
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:     ],
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:     "2": [
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:         {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "devices": [
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "/dev/loop5"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             ],
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_name": "ceph_lv2",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_size": "21470642176",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "name": "ceph_lv2",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "tags": {
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.cluster_name": "ceph",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.crush_device_class": "",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.encrypted": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.objectstore": "bluestore",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osd_id": "2",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.type": "block",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.vdo": "0",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:                 "ceph.with_tpm": "0"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             },
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "type": "block",
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:             "vg_name": "ceph_vg2"
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:         }
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]:     ]
Jan 31 08:42:52 compute-0 funny_kowalevski[253460]: }
Jan 31 08:42:52 compute-0 systemd[1]: libpod-54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e.scope: Deactivated successfully.
Jan 31 08:42:52 compute-0 podman[253444]: 2026-01-31 08:42:52.508829847 +0000 UTC m=+0.509290667 container died 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:42:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-5eecc8179abc246bdf19c29bca9a1fb729b609ed377d517ff28f0d2c3ebadade-merged.mount: Deactivated successfully.
Jan 31 08:42:53 compute-0 podman[253444]: 2026-01-31 08:42:53.209807799 +0000 UTC m=+1.210268629 container remove 54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:53 compute-0 systemd[1]: libpod-conmon-54a11fb941248210233720a9b35b35f3fde56f14c757ac09bd8ee02482fb4b9e.scope: Deactivated successfully.
Jan 31 08:42:53 compute-0 sudo[253362]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:53 compute-0 sudo[253481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:42:53 compute-0 sudo[253481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:53 compute-0 sudo[253481]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:53 compute-0 sudo[253506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:42:53 compute-0 sudo[253506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:53 compute-0 podman[253543]: 2026-01-31 08:42:53.583285038 +0000 UTC m=+0.019502039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:53 compute-0 podman[253543]: 2026-01-31 08:42:53.718612102 +0000 UTC m=+0.154829063 container create f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:42:53 compute-0 systemd[1]: Started libpod-conmon-f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd.scope.
Jan 31 08:42:53 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:54 compute-0 ceph-mon[75227]: pgmap v1222: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:54 compute-0 podman[253543]: 2026-01-31 08:42:54.249446094 +0000 UTC m=+0.685663085 container init f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 08:42:54 compute-0 podman[253543]: 2026-01-31 08:42:54.254103367 +0000 UTC m=+0.690320338 container start f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:42:54 compute-0 admiring_khorana[253559]: 167 167
Jan 31 08:42:54 compute-0 systemd[1]: libpod-f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd.scope: Deactivated successfully.
Jan 31 08:42:54 compute-0 podman[253543]: 2026-01-31 08:42:54.4970444 +0000 UTC m=+0.933261401 container attach f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:42:54 compute-0 podman[253543]: 2026-01-31 08:42:54.498831271 +0000 UTC m=+0.935048232 container died f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:42:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-16043fdba2cde5259c8979996b724abec2ed07ba6d528d79a2e76300f51a7224-merged.mount: Deactivated successfully.
Jan 31 08:42:55 compute-0 podman[253543]: 2026-01-31 08:42:55.08021928 +0000 UTC m=+1.516436251 container remove f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_khorana, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:42:55 compute-0 systemd[1]: libpod-conmon-f4629d6b7b08aa8bc50d7247bf8dce6aebcfe03f4cba15f93d4c91b0df3bbebd.scope: Deactivated successfully.
Jan 31 08:42:55 compute-0 podman[253603]: 2026-01-31 08:42:55.180621984 +0000 UTC m=+0.023433652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:42:55 compute-0 podman[253603]: 2026-01-31 08:42:55.323324578 +0000 UTC m=+0.166136246 container create 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 31 08:42:55 compute-0 podman[253574]: 2026-01-31 08:42:55.330537785 +0000 UTC m=+1.037678700 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 08:42:55 compute-0 podman[253565]: 2026-01-31 08:42:55.431672479 +0000 UTC m=+1.140971606 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 08:42:55 compute-0 systemd[1]: Started libpod-conmon-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope.
Jan 31 08:42:55 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:42:55 compute-0 podman[253603]: 2026-01-31 08:42:55.573028815 +0000 UTC m=+0.415840503 container init 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 31 08:42:55 compute-0 podman[253603]: 2026-01-31 08:42:55.580334274 +0000 UTC m=+0.423145942 container start 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:42:55 compute-0 podman[253603]: 2026-01-31 08:42:55.648237458 +0000 UTC m=+0.491049136 container attach 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:42:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:42:55 compute-0 ceph-mon[75227]: pgmap v1223: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:56 compute-0 lvm[253720]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:42:56 compute-0 lvm[253717]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:42:56 compute-0 lvm[253717]: VG ceph_vg0 finished
Jan 31 08:42:56 compute-0 lvm[253720]: VG ceph_vg1 finished
Jan 31 08:42:56 compute-0 lvm[253722]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:42:56 compute-0 lvm[253722]: VG ceph_vg2 finished
Jan 31 08:42:56 compute-0 beautiful_gould[253640]: {}
Jan 31 08:42:56 compute-0 systemd[1]: libpod-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope: Deactivated successfully.
Jan 31 08:42:56 compute-0 systemd[1]: libpod-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope: Consumed 1.103s CPU time.
Jan 31 08:42:56 compute-0 podman[253603]: 2026-01-31 08:42:56.369630724 +0000 UTC m=+1.212442392 container died 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-36174c8e720188b55b7c3f57c3e7f9617469678eb1953c5fb65dc1240927bc78-merged.mount: Deactivated successfully.
Jan 31 08:42:57 compute-0 podman[253603]: 2026-01-31 08:42:57.263007274 +0000 UTC m=+2.105818962 container remove 57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:42:57 compute-0 systemd[1]: libpod-conmon-57d61eb607135ecf8a9c54f208bd568a287f0b4c6d3d197283714ff486de85de.scope: Deactivated successfully.
Jan 31 08:42:57 compute-0 sudo[253506]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:42:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:42:57 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:42:57 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:42:57 compute-0 sudo[253739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:42:57 compute-0 sudo[253739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:42:57 compute-0 sudo[253739]: pam_unix(sudo:session): session closed for user root
Jan 31 08:42:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:42:58 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:42:58 compute-0 ceph-mon[75227]: pgmap v1224: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:42:59 compute-0 ceph-mon[75227]: pgmap v1225: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:02 compute-0 ceph-mon[75227]: pgmap v1226: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:03 compute-0 ceph-mon[75227]: pgmap v1227: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:05 compute-0 ceph-mon[75227]: pgmap v1228: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:08 compute-0 ceph-mon[75227]: pgmap v1229: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:10 compute-0 ceph-mon[75227]: pgmap v1230: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:12 compute-0 ceph-mon[75227]: pgmap v1231: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:14 compute-0 ceph-mon[75227]: pgmap v1232: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:16 compute-0 ceph-mon[75227]: pgmap v1233: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:43:17.901 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:43:17.903 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:43:17.903 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:43:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713800401' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:43:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:43:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713800401' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:43:18 compute-0 ceph-mon[75227]: pgmap v1234: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3713800401' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:43:19 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3713800401' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:43:19 compute-0 nova_compute[238824]: 2026-01-31 08:43:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:20 compute-0 ceph-mon[75227]: pgmap v1235: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:22 compute-0 ceph-mon[75227]: pgmap v1236: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:22 compute-0 nova_compute[238824]: 2026-01-31 08:43:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:22 compute-0 nova_compute[238824]: 2026-01-31 08:43:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:22 compute-0 nova_compute[238824]: 2026-01-31 08:43:22.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:22 compute-0 nova_compute[238824]: 2026-01-31 08:43:22.341 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:43:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:23 compute-0 ceph-mon[75227]: pgmap v1237: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:26 compute-0 ceph-mon[75227]: pgmap v1238: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:26 compute-0 podman[253765]: 2026-01-31 08:43:26.185592234 +0000 UTC m=+0.049941331 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:43:26 compute-0 podman[253764]: 2026-01-31 08:43:26.232139136 +0000 UTC m=+0.098962854 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 08:43:26 compute-0 nova_compute[238824]: 2026-01-31 08:43:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:26 compute-0 nova_compute[238824]: 2026-01-31 08:43:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:43:26 compute-0 nova_compute[238824]: 2026-01-31 08:43:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:43:26 compute-0 nova_compute[238824]: 2026-01-31 08:43:26.367 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:43:26 compute-0 nova_compute[238824]: 2026-01-31 08:43:26.367 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.368 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.369 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:43:27 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709318317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:27 compute-0 nova_compute[238824]: 2026-01-31 08:43:27.928 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:27 compute-0 ceph-mon[75227]: pgmap v1239: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.083 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.084 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.085 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.085 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.163 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.164 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.180 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:43:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:43:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1317742371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.703 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.709 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.734 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.736 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:43:28 compute-0 nova_compute[238824]: 2026-01-31 08:43:28.736 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:43:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1709318317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1317742371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:43:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:29 compute-0 nova_compute[238824]: 2026-01-31 08:43:29.731 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:43:30 compute-0 ceph-mon[75227]: pgmap v1240: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:43:31
Jan 31 08:43:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:43:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:43:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images']
Jan 31 08:43:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:43:31 compute-0 ceph-mon[75227]: pgmap v1241: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:43:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:43:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:33 compute-0 ceph-mon[75227]: pgmap v1242: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:36 compute-0 ceph-mon[75227]: pgmap v1243: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:38 compute-0 ceph-mon[75227]: pgmap v1244: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:40 compute-0 ceph-mon[75227]: pgmap v1245: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:41 compute-0 ceph-mon[75227]: pgmap v1246: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:43:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:44 compute-0 ceph-mon[75227]: pgmap v1247: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:45 compute-0 ceph-mon[75227]: pgmap v1248: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:48 compute-0 ceph-mon[75227]: pgmap v1249: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:49 compute-0 ceph-mon[75227]: pgmap v1250: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.107221) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030107342, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 947, "num_deletes": 251, "total_data_size": 1340247, "memory_usage": 1364496, "flush_reason": "Manual Compaction"}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030133798, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1327609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24782, "largest_seqno": 25728, "table_properties": {"data_size": 1322899, "index_size": 2298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10153, "raw_average_key_size": 19, "raw_value_size": 1313514, "raw_average_value_size": 2535, "num_data_blocks": 103, "num_entries": 518, "num_filter_entries": 518, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848941, "oldest_key_time": 1769848941, "file_creation_time": 1769849030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 26663 microseconds, and 2970 cpu microseconds.
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.133890) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1327609 bytes OK
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.133915) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.142506) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.142550) EVENT_LOG_v1 {"time_micros": 1769849030142541, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.142576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1335719, prev total WAL file size 1335719, number of live WAL files 2.
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.143145) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1296KB)], [56(8154KB)]
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030143208, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9678042, "oldest_snapshot_seqno": -1}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4735 keys, 7904189 bytes, temperature: kUnknown
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030381279, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7904189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7871966, "index_size": 19313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118665, "raw_average_key_size": 25, "raw_value_size": 7785634, "raw_average_value_size": 1644, "num_data_blocks": 797, "num_entries": 4735, "num_filter_entries": 4735, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.381727) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7904189 bytes
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.385612) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.6 rd, 33.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(13.2) write-amplify(6.0) OK, records in: 5249, records dropped: 514 output_compression: NoCompression
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.385641) EVENT_LOG_v1 {"time_micros": 1769849030385628, "job": 30, "event": "compaction_finished", "compaction_time_micros": 238378, "compaction_time_cpu_micros": 13796, "output_level": 6, "num_output_files": 1, "total_output_size": 7904189, "num_input_records": 5249, "num_output_records": 4735, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030386132, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849030387684, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.143055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:43:50 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:43:50.387784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:43:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:51 compute-0 ceph-mon[75227]: pgmap v1251: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:53 compute-0 ceph-mon[75227]: pgmap v1252: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:43:55 compute-0 ceph-mon[75227]: pgmap v1253: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:57 compute-0 podman[253854]: 2026-01-31 08:43:57.171092923 +0000 UTC m=+0.068198253 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 08:43:57 compute-0 podman[253855]: 2026-01-31 08:43:57.173034048 +0000 UTC m=+0.063938410 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 08:43:57 compute-0 sudo[253897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:57 compute-0 sudo[253897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:57 compute-0 sudo[253897]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:57 compute-0 sudo[253922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:43:57 compute-0 sudo[253922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:57 compute-0 ceph-mon[75227]: pgmap v1254: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:58 compute-0 sudo[253922]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:43:58 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:43:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:43:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:43:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:43:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:43:58 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:43:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:43:58 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:43:58 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:43:58 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:58 compute-0 sudo[253978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:43:58 compute-0 sudo[253978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:58 compute-0 sudo[253978]: pam_unix(sudo:session): session closed for user root
Jan 31 08:43:58 compute-0 sudo[254003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:43:58 compute-0 sudo[254003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:43:58 compute-0 podman[254042]: 2026-01-31 08:43:58.80900213 +0000 UTC m=+0.023505843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:43:58 compute-0 podman[254042]: 2026-01-31 08:43:58.95605385 +0000 UTC m=+0.170557573 container create b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:43:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:43:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:43:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:43:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:43:59 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:43:59 compute-0 systemd[1]: Started libpod-conmon-b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499.scope.
Jan 31 08:43:59 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:43:59 compute-0 podman[254042]: 2026-01-31 08:43:59.157428003 +0000 UTC m=+0.371931726 container init b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:43:59 compute-0 podman[254042]: 2026-01-31 08:43:59.16538046 +0000 UTC m=+0.379884133 container start b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:43:59 compute-0 gifted_chaum[254059]: 167 167
Jan 31 08:43:59 compute-0 systemd[1]: libpod-b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499.scope: Deactivated successfully.
Jan 31 08:43:59 compute-0 podman[254042]: 2026-01-31 08:43:59.206435895 +0000 UTC m=+0.420939628 container attach b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:43:59 compute-0 podman[254042]: 2026-01-31 08:43:59.207023892 +0000 UTC m=+0.421527595 container died b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-e81985b7de4e1079e9b46e13fef5a5c057a373a0048a22ef175338e84761e257-merged.mount: Deactivated successfully.
Jan 31 08:43:59 compute-0 podman[254042]: 2026-01-31 08:43:59.715621878 +0000 UTC m=+0.930125561 container remove b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:43:59 compute-0 systemd[1]: libpod-conmon-b0d352eff89748d599bc455dd767335da90b2d519f4d6b8c71edf8e444d4c499.scope: Deactivated successfully.
Jan 31 08:43:59 compute-0 podman[254084]: 2026-01-31 08:43:59.81666497 +0000 UTC m=+0.024266915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:43:59 compute-0 podman[254084]: 2026-01-31 08:43:59.917077044 +0000 UTC m=+0.124678979 container create b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:43:59 compute-0 systemd[1]: Started libpod-conmon-b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb.scope.
Jan 31 08:44:00 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:00 compute-0 podman[254084]: 2026-01-31 08:44:00.120818555 +0000 UTC m=+0.328420530 container init b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:44:00 compute-0 podman[254084]: 2026-01-31 08:44:00.127235859 +0000 UTC m=+0.334837804 container start b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:44:00 compute-0 podman[254084]: 2026-01-31 08:44:00.205106918 +0000 UTC m=+0.412708853 container attach b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:44:00 compute-0 ceph-mon[75227]: pgmap v1255: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:00 compute-0 jolly_keller[254101]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:44:00 compute-0 jolly_keller[254101]: --> All data devices are unavailable
Jan 31 08:44:00 compute-0 systemd[1]: libpod-b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb.scope: Deactivated successfully.
Jan 31 08:44:00 compute-0 podman[254084]: 2026-01-31 08:44:00.583490167 +0000 UTC m=+0.791092092 container died b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:44:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c2fed51933d173e8b2e9d3fcb9628ef8fb6ef5156b2aefb537bdcfe42c85144-merged.mount: Deactivated successfully.
Jan 31 08:44:01 compute-0 podman[254084]: 2026-01-31 08:44:01.697859011 +0000 UTC m=+1.905460946 container remove b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 31 08:44:01 compute-0 systemd[1]: libpod-conmon-b65fabd1a2d6eb9e3529a2f2f85fbcf8a80a040c12bc169f3ba3ff4e581bbabb.scope: Deactivated successfully.
Jan 31 08:44:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:01 compute-0 sudo[254003]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:01 compute-0 sudo[254136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:01 compute-0 sudo[254136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:01 compute-0 sudo[254136]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:01 compute-0 sudo[254161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:44:01 compute-0 sudo[254161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:02 compute-0 ceph-mon[75227]: pgmap v1256: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.127662743 +0000 UTC m=+0.023893585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.344664003 +0000 UTC m=+0.240894855 container create 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:44:02 compute-0 systemd[1]: Started libpod-conmon-1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd.scope.
Jan 31 08:44:02 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.850993174 +0000 UTC m=+0.747224046 container init 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.854853934 +0000 UTC m=+0.751084756 container start 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 08:44:02 compute-0 intelligent_liskov[254216]: 167 167
Jan 31 08:44:02 compute-0 systemd[1]: libpod-1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd.scope: Deactivated successfully.
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.893867581 +0000 UTC m=+0.790098413 container attach 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.894971993 +0000 UTC m=+0.791202845 container died 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 31 08:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-34290d84ddc8ac9c5fefc5d2aaec646d5467245115818f019d89534344b3875d-merged.mount: Deactivated successfully.
Jan 31 08:44:02 compute-0 podman[254200]: 2026-01-31 08:44:02.957369919 +0000 UTC m=+0.853600741 container remove 1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_liskov, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:44:02 compute-0 systemd[1]: libpod-conmon-1e5875e16a9e27d60683d9cb89feac9b477c14af6a7cf468ce2eeff15cab62cd.scope: Deactivated successfully.
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.095519173 +0000 UTC m=+0.049649783 container create a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:44:03 compute-0 systemd[1]: Started libpod-conmon-a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58.scope.
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.070128116 +0000 UTC m=+0.024258746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:03 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.196470102 +0000 UTC m=+0.150600732 container init a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.203168383 +0000 UTC m=+0.157298993 container start a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.209372541 +0000 UTC m=+0.163503171 container attach a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:44:03 compute-0 elated_hugle[254256]: {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:     "0": [
Jan 31 08:44:03 compute-0 elated_hugle[254256]:         {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "devices": [
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "/dev/loop3"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             ],
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_name": "ceph_lv0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_size": "21470642176",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "name": "ceph_lv0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "tags": {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.crush_device_class": "",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.encrypted": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.objectstore": "bluestore",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osd_id": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.type": "block",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.vdo": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.with_tpm": "0"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             },
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "type": "block",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "vg_name": "ceph_vg0"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:         }
Jan 31 08:44:03 compute-0 elated_hugle[254256]:     ],
Jan 31 08:44:03 compute-0 elated_hugle[254256]:     "1": [
Jan 31 08:44:03 compute-0 elated_hugle[254256]:         {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "devices": [
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "/dev/loop4"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             ],
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_name": "ceph_lv1",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_size": "21470642176",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "name": "ceph_lv1",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "tags": {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.crush_device_class": "",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.encrypted": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.objectstore": "bluestore",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osd_id": "1",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.type": "block",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.vdo": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.with_tpm": "0"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             },
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "type": "block",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "vg_name": "ceph_vg1"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:         }
Jan 31 08:44:03 compute-0 elated_hugle[254256]:     ],
Jan 31 08:44:03 compute-0 elated_hugle[254256]:     "2": [
Jan 31 08:44:03 compute-0 elated_hugle[254256]:         {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "devices": [
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "/dev/loop5"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             ],
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_name": "ceph_lv2",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_size": "21470642176",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "name": "ceph_lv2",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "tags": {
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.cluster_name": "ceph",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.crush_device_class": "",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.encrypted": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.objectstore": "bluestore",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osd_id": "2",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.type": "block",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.vdo": "0",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:                 "ceph.with_tpm": "0"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             },
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "type": "block",
Jan 31 08:44:03 compute-0 elated_hugle[254256]:             "vg_name": "ceph_vg2"
Jan 31 08:44:03 compute-0 elated_hugle[254256]:         }
Jan 31 08:44:03 compute-0 elated_hugle[254256]:     ]
Jan 31 08:44:03 compute-0 elated_hugle[254256]: }
Jan 31 08:44:03 compute-0 systemd[1]: libpod-a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58.scope: Deactivated successfully.
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.494738538 +0000 UTC m=+0.448869148 container died a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-37a05896a638c48bdedbbb61532408ea787fcea11af9c55a9f001a5d583b269c-merged.mount: Deactivated successfully.
Jan 31 08:44:03 compute-0 podman[254240]: 2026-01-31 08:44:03.592726963 +0000 UTC m=+0.546857573 container remove a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:44:03 compute-0 systemd[1]: libpod-conmon-a181d9ed643580ff2b4427c0ab8bba1331954d442fb921155c3418666638ab58.scope: Deactivated successfully.
Jan 31 08:44:03 compute-0 sudo[254161]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:03 compute-0 sudo[254278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:44:03 compute-0 sudo[254278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:03 compute-0 sudo[254278]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:03 compute-0 sudo[254303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:44:03 compute-0 sudo[254303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:03 compute-0 ceph-mon[75227]: pgmap v1257: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.012447246 +0000 UTC m=+0.025273075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.137937047 +0000 UTC m=+0.150762856 container create 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:44:04 compute-0 systemd[1]: Started libpod-conmon-2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382.scope.
Jan 31 08:44:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.223306001 +0000 UTC m=+0.236131840 container init 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.229370504 +0000 UTC m=+0.242196313 container start 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:44:04 compute-0 confident_bhaskara[254358]: 167 167
Jan 31 08:44:04 compute-0 systemd[1]: libpod-2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382.scope: Deactivated successfully.
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.238330211 +0000 UTC m=+0.251156040 container attach 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.238716512 +0000 UTC m=+0.251542321 container died 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bda1cdda8a2b77d5e3d87a9bdb0452dfdeb94d80571d18d9e4aad52013f98ddd-merged.mount: Deactivated successfully.
Jan 31 08:44:04 compute-0 podman[254341]: 2026-01-31 08:44:04.297853414 +0000 UTC m=+0.310679223 container remove 2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:44:04 compute-0 systemd[1]: libpod-conmon-2cc44f5e472d841a9e6a05da54e126ea5a909b89fb2019159a3c30a5d0ff4382.scope: Deactivated successfully.
Jan 31 08:44:04 compute-0 podman[254383]: 2026-01-31 08:44:04.417096267 +0000 UTC m=+0.041248222 container create b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:44:04 compute-0 systemd[1]: Started libpod-conmon-b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57.scope.
Jan 31 08:44:04 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:44:04 compute-0 podman[254383]: 2026-01-31 08:44:04.488682516 +0000 UTC m=+0.112834481 container init b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 31 08:44:04 compute-0 podman[254383]: 2026-01-31 08:44:04.394868141 +0000 UTC m=+0.019020116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:44:04 compute-0 podman[254383]: 2026-01-31 08:44:04.497066336 +0000 UTC m=+0.121218281 container start b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:44:04 compute-0 podman[254383]: 2026-01-31 08:44:04.50174822 +0000 UTC m=+0.125900225 container attach b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:44:05 compute-0 lvm[254479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:44:05 compute-0 lvm[254480]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:44:05 compute-0 lvm[254480]: VG ceph_vg1 finished
Jan 31 08:44:05 compute-0 lvm[254479]: VG ceph_vg0 finished
Jan 31 08:44:05 compute-0 lvm[254482]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:44:05 compute-0 lvm[254482]: VG ceph_vg2 finished
Jan 31 08:44:05 compute-0 clever_galois[254400]: {}
Jan 31 08:44:05 compute-0 systemd[1]: libpod-b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57.scope: Deactivated successfully.
Jan 31 08:44:05 compute-0 podman[254383]: 2026-01-31 08:44:05.178972552 +0000 UTC m=+0.803124528 container died b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb45dc7f53ca57160022d23dae413139079ffcebefa026f8c5aaa20af827d138-merged.mount: Deactivated successfully.
Jan 31 08:44:05 compute-0 podman[254383]: 2026-01-31 08:44:05.234641846 +0000 UTC m=+0.858793791 container remove b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:44:05 compute-0 systemd[1]: libpod-conmon-b279eb4f9cdc9b369e286294977c1e8a74d5a68f17ea7f85ce6659a0b7119f57.scope: Deactivated successfully.
Jan 31 08:44:05 compute-0 sudo[254303]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:44:05 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:44:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:44:05 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:44:05 compute-0 sudo[254499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:44:05 compute-0 sudo[254499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:44:05 compute-0 sudo[254499]: pam_unix(sudo:session): session closed for user root
Jan 31 08:44:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:44:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:44:06 compute-0 ceph-mon[75227]: pgmap v1258: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:08 compute-0 ceph-mon[75227]: pgmap v1259: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:09 compute-0 ceph-mon[75227]: pgmap v1260: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:12 compute-0 ceph-mon[75227]: pgmap v1261: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:14 compute-0 ceph-mon[75227]: pgmap v1262: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:16 compute-0 ceph-mon[75227]: pgmap v1263: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:44:17.902 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:44:17.905 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:44:17.905 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:44:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3519593418' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:44:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:44:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3519593418' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:44:18 compute-0 ceph-mon[75227]: pgmap v1264: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3519593418' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:44:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3519593418' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:44:19 compute-0 nova_compute[238824]: 2026-01-31 08:44:19.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:19 compute-0 nova_compute[238824]: 2026-01-31 08:44:19.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:44:19 compute-0 nova_compute[238824]: 2026-01-31 08:44:19.352 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:44:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:19 compute-0 ceph-mon[75227]: pgmap v1265: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:20 compute-0 nova_compute[238824]: 2026-01-31 08:44:20.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:21 compute-0 nova_compute[238824]: 2026-01-31 08:44:21.354 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:21 compute-0 ceph-mon[75227]: pgmap v1266: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:22 compute-0 nova_compute[238824]: 2026-01-31 08:44:22.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:22 compute-0 nova_compute[238824]: 2026-01-31 08:44:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:22 compute-0 nova_compute[238824]: 2026-01-31 08:44:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:44:23 compute-0 nova_compute[238824]: 2026-01-31 08:44:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:23 compute-0 ceph-mon[75227]: pgmap v1267: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:25 compute-0 nova_compute[238824]: 2026-01-31 08:44:25.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:25 compute-0 ceph-mon[75227]: pgmap v1268: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:26 compute-0 nova_compute[238824]: 2026-01-31 08:44:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:26 compute-0 nova_compute[238824]: 2026-01-31 08:44:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:44:26 compute-0 nova_compute[238824]: 2026-01-31 08:44:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:44:26 compute-0 nova_compute[238824]: 2026-01-31 08:44:26.359 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:44:26 compute-0 nova_compute[238824]: 2026-01-31 08:44:26.360 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:26 compute-0 nova_compute[238824]: 2026-01-31 08:44:26.360 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:44:27 compute-0 nova_compute[238824]: 2026-01-31 08:44:27.353 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:27 compute-0 nova_compute[238824]: 2026-01-31 08:44:27.353 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:28 compute-0 podman[254525]: 2026-01-31 08:44:28.157830668 +0000 UTC m=+0.052428672 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:44:28 compute-0 podman[254524]: 2026-01-31 08:44:28.186429806 +0000 UTC m=+0.081124222 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.367 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.368 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.368 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:28 compute-0 ceph-mon[75227]: pgmap v1269: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:44:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694860644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:28 compute-0 nova_compute[238824]: 2026-01-31 08:44:28.892 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.045 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.046 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.046 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.047 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.520 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.521 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:44:29 compute-0 nova_compute[238824]: 2026-01-31 08:44:29.538 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:44:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/694860644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:44:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350916459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:30 compute-0 nova_compute[238824]: 2026-01-31 08:44:30.088 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:44:30 compute-0 nova_compute[238824]: 2026-01-31 08:44:30.093 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:44:30 compute-0 nova_compute[238824]: 2026-01-31 08:44:30.110 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:44:30 compute-0 nova_compute[238824]: 2026-01-31 08:44:30.111 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:44:30 compute-0 nova_compute[238824]: 2026-01-31 08:44:30.112 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:44:30 compute-0 sshd-session[254613]: Accepted publickey for zuul from 192.168.122.30 port 33018 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:44:30 compute-0 systemd-logind[793]: New session 52 of user zuul.
Jan 31 08:44:30 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 31 08:44:30 compute-0 sshd-session[254613]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:44:30 compute-0 ceph-mon[75227]: pgmap v1270: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:30 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2350916459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:44:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:31 compute-0 nova_compute[238824]: 2026-01-31 08:44:31.107 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:44:31
Jan 31 08:44:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:44:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:44:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta']
Jan 31 08:44:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:44:32 compute-0 ceph-mon[75227]: pgmap v1271: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:44:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:44:33 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:44:33.053 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:5f:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:1b:f0:08:31:5c'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:44:33 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:44:33.055 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:44:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:33 compute-0 ceph-mon[75227]: pgmap v1272: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:35 compute-0 ceph-mon[75227]: pgmap v1273: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:36 compute-0 sshd-session[254616]: Connection closed by 192.168.122.30 port 33018
Jan 31 08:44:36 compute-0 sshd-session[254613]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:44:36 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Jan 31 08:44:36 compute-0 systemd-logind[793]: Session 52 logged out. Waiting for processes to exit.
Jan 31 08:44:36 compute-0 systemd-logind[793]: Removed session 52.
Jan 31 08:44:37 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:44:37.057 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:44:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:37 compute-0 ceph-mon[75227]: pgmap v1274: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:40 compute-0 ceph-mon[75227]: pgmap v1275: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:42 compute-0 ceph-mon[75227]: pgmap v1276: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:44:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:44 compute-0 ceph-mon[75227]: pgmap v1277: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:46 compute-0 ceph-mon[75227]: pgmap v1278: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:48 compute-0 ceph-mon[75227]: pgmap v1279: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:49 compute-0 nova_compute[238824]: 2026-01-31 08:44:49.353 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:44:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:49 compute-0 ceph-mon[75227]: pgmap v1280: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:52 compute-0 ceph-mon[75227]: pgmap v1281: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:54 compute-0 ceph-mon[75227]: pgmap v1282: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:44:56 compute-0 ceph-mon[75227]: pgmap v1283: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:58 compute-0 ceph-mon[75227]: pgmap v1284: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:59 compute-0 podman[254871]: 2026-01-31 08:44:59.176478216 +0000 UTC m=+0.056005854 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 08:44:59 compute-0 podman[254870]: 2026-01-31 08:44:59.198005142 +0000 UTC m=+0.084067977 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 08:44:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:44:59 compute-0 ceph-mon[75227]: pgmap v1285: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:02 compute-0 ceph-mon[75227]: pgmap v1286: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:03 compute-0 ceph-mon[75227]: pgmap v1287: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:05 compute-0 sudo[254917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:05 compute-0 sudo[254917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:05 compute-0 sudo[254917]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:05 compute-0 sudo[254942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:45:05 compute-0 sudo[254942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:05 compute-0 sudo[254942]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:45:05 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:45:05 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:45:05 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:45:05 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:45:05 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:45:05 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:06 compute-0 sudo[254999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:06 compute-0 sudo[254999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:06 compute-0 sudo[254999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:06 compute-0 sudo[255024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:45:06 compute-0 sudo[255024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.453642033 +0000 UTC m=+0.033190101 container create 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:06 compute-0 systemd[1]: Started libpod-conmon-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope.
Jan 31 08:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.531356257 +0000 UTC m=+0.110904345 container init 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.439397865 +0000 UTC m=+0.018945953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.53706179 +0000 UTC m=+0.116609858 container start 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.5401923 +0000 UTC m=+0.119740418 container attach 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:06 compute-0 systemd[1]: libpod-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope: Deactivated successfully.
Jan 31 08:45:06 compute-0 gifted_banach[255076]: 167 167
Jan 31 08:45:06 compute-0 conmon[255076]: conmon 8ac3fb3ff9fd89327129 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope/container/memory.events
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.542201987 +0000 UTC m=+0.121750065 container died 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-596ba08bf2a296dda3c8ae1c5fe2aa70de7a039107a0c557efb43d1b6710989e-merged.mount: Deactivated successfully.
Jan 31 08:45:06 compute-0 podman[255060]: 2026-01-31 08:45:06.59226929 +0000 UTC m=+0.171817348 container remove 8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_banach, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 31 08:45:06 compute-0 systemd[1]: libpod-conmon-8ac3fb3ff9fd8932712903f7aa8b47a22ba99d3c569678970e5a36bfe909371d.scope: Deactivated successfully.
Jan 31 08:45:06 compute-0 podman[255101]: 2026-01-31 08:45:06.695971317 +0000 UTC m=+0.032969134 container create 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:45:06 compute-0 systemd[1]: Started libpod-conmon-1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf.scope.
Jan 31 08:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:06 compute-0 podman[255101]: 2026-01-31 08:45:06.767467074 +0000 UTC m=+0.104464891 container init 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:06 compute-0 podman[255101]: 2026-01-31 08:45:06.775316938 +0000 UTC m=+0.112314755 container start 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:45:06 compute-0 podman[255101]: 2026-01-31 08:45:06.682459041 +0000 UTC m=+0.019456878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:06 compute-0 podman[255101]: 2026-01-31 08:45:06.803887826 +0000 UTC m=+0.140885633 container attach 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:06 compute-0 ceph-mon[75227]: pgmap v1288: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:45:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:45:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:45:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:45:06 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:45:07 compute-0 interesting_yonath[255117]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:45:07 compute-0 interesting_yonath[255117]: --> All data devices are unavailable
Jan 31 08:45:07 compute-0 systemd[1]: libpod-1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf.scope: Deactivated successfully.
Jan 31 08:45:07 compute-0 podman[255101]: 2026-01-31 08:45:07.236508718 +0000 UTC m=+0.573506545 container died 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-29282ff8fa4fb399acdbaf814dc9f2203e847e9c5f6132d36aed59a617a4a26c-merged.mount: Deactivated successfully.
Jan 31 08:45:07 compute-0 podman[255101]: 2026-01-31 08:45:07.276611036 +0000 UTC m=+0.613608853 container remove 1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:45:07 compute-0 systemd[1]: libpod-conmon-1e73dfc870cb3b64ed10e69111b8482ad3a65b00537b2561cefe20e4455145cf.scope: Deactivated successfully.
Jan 31 08:45:07 compute-0 sudo[255024]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:07 compute-0 sudo[255149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:07 compute-0 sudo[255149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:07 compute-0 sudo[255149]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:07 compute-0 sudo[255174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:45:07 compute-0 sudo[255174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.661867952 +0000 UTC m=+0.035848767 container create 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:07 compute-0 systemd[1]: Started libpod-conmon-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope.
Jan 31 08:45:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.714405476 +0000 UTC m=+0.088386311 container init 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.720175021 +0000 UTC m=+0.094155836 container start 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:07 compute-0 clever_wozniak[255227]: 167 167
Jan 31 08:45:07 compute-0 systemd[1]: libpod-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope: Deactivated successfully.
Jan 31 08:45:07 compute-0 conmon[255227]: conmon 22cff697f383b6a84dfb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope/container/memory.events
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.724848165 +0000 UTC m=+0.098828980 container attach 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.72540578 +0000 UTC m=+0.099386595 container died 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.645760131 +0000 UTC m=+0.019740966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-982c652eed57059d6e26e352ea273ee27873d068729defe6d458fe4ea59a289d-merged.mount: Deactivated successfully.
Jan 31 08:45:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:07 compute-0 podman[255211]: 2026-01-31 08:45:07.75893091 +0000 UTC m=+0.132911725 container remove 22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 08:45:07 compute-0 systemd[1]: libpod-conmon-22cff697f383b6a84dfb19c03a4cbab7183003b55ee03d1fce8308b3704cab38.scope: Deactivated successfully.
Jan 31 08:45:07 compute-0 ceph-mon[75227]: pgmap v1289: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:07 compute-0 podman[255251]: 2026-01-31 08:45:07.891746841 +0000 UTC m=+0.037101613 container create 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 31 08:45:07 compute-0 systemd[1]: Started libpod-conmon-4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba.scope.
Jan 31 08:45:07 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:07 compute-0 podman[255251]: 2026-01-31 08:45:07.874421455 +0000 UTC m=+0.019776257 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:07 compute-0 podman[255251]: 2026-01-31 08:45:07.974680625 +0000 UTC m=+0.120035427 container init 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:45:07 compute-0 podman[255251]: 2026-01-31 08:45:07.980023598 +0000 UTC m=+0.125378410 container start 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:45:07 compute-0 podman[255251]: 2026-01-31 08:45:07.98427346 +0000 UTC m=+0.129628262 container attach 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]: {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:     "0": [
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:         {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "devices": [
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "/dev/loop3"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             ],
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_name": "ceph_lv0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_size": "21470642176",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "name": "ceph_lv0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "tags": {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.crush_device_class": "",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.encrypted": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.objectstore": "bluestore",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osd_id": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.type": "block",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.vdo": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.with_tpm": "0"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             },
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "type": "block",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "vg_name": "ceph_vg0"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:         }
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:     ],
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:     "1": [
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:         {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "devices": [
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "/dev/loop4"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             ],
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_name": "ceph_lv1",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_size": "21470642176",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "name": "ceph_lv1",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "tags": {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.crush_device_class": "",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.encrypted": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.objectstore": "bluestore",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osd_id": "1",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.type": "block",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.vdo": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.with_tpm": "0"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             },
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "type": "block",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "vg_name": "ceph_vg1"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:         }
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:     ],
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:     "2": [
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:         {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "devices": [
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "/dev/loop5"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             ],
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_name": "ceph_lv2",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_size": "21470642176",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "name": "ceph_lv2",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "tags": {
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.cluster_name": "ceph",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.crush_device_class": "",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.encrypted": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.objectstore": "bluestore",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osd_id": "2",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.type": "block",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.vdo": "0",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:                 "ceph.with_tpm": "0"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             },
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "type": "block",
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:             "vg_name": "ceph_vg2"
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:         }
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]:     ]
Jan 31 08:45:08 compute-0 hungry_rhodes[255267]: }
Jan 31 08:45:08 compute-0 systemd[1]: libpod-4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba.scope: Deactivated successfully.
Jan 31 08:45:08 compute-0 podman[255251]: 2026-01-31 08:45:08.298339328 +0000 UTC m=+0.443694140 container died 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 08:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b349a34568c7d44093d814cfcc47a1c2290c37e62b37a9b9cff7023528fa627-merged.mount: Deactivated successfully.
Jan 31 08:45:08 compute-0 podman[255251]: 2026-01-31 08:45:08.345325343 +0000 UTC m=+0.490680135 container remove 4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:08 compute-0 systemd[1]: libpod-conmon-4dc77474a684aa3cc7645dd6087859ad6ecdf104a8d72a9c399c82edf4d0f7ba.scope: Deactivated successfully.
Jan 31 08:45:08 compute-0 sudo[255174]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:08 compute-0 sudo[255288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:45:08 compute-0 sudo[255288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:08 compute-0 sudo[255288]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:08 compute-0 sudo[255313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:45:08 compute-0 sudo[255313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.751716444 +0000 UTC m=+0.038052410 container create 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:45:08 compute-0 systemd[1]: Started libpod-conmon-9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254.scope.
Jan 31 08:45:08 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.806649676 +0000 UTC m=+0.092985702 container init 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.813910104 +0000 UTC m=+0.100246070 container start 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:45:08 compute-0 stupefied_faraday[255367]: 167 167
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.817558069 +0000 UTC m=+0.103894035 container attach 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:45:08 compute-0 systemd[1]: libpod-9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254.scope: Deactivated successfully.
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.819293428 +0000 UTC m=+0.105629404 container died 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.735383877 +0000 UTC m=+0.021719853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-49f9493aae9a2ef63689c0cc3777a3950ff17cba1188eead46a37f13e9a68b7f-merged.mount: Deactivated successfully.
Jan 31 08:45:08 compute-0 podman[255350]: 2026-01-31 08:45:08.861716563 +0000 UTC m=+0.148052559 container remove 9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:45:08 compute-0 systemd[1]: libpod-conmon-9b3dfaa397b89754cd10e089053d881fa084614dcb311daf25df8251ae1f3254.scope: Deactivated successfully.
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.025128969 +0000 UTC m=+0.036857295 container create b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 31 08:45:09 compute-0 systemd[1]: Started libpod-conmon-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope.
Jan 31 08:45:09 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.009598245 +0000 UTC m=+0.021326601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.110124652 +0000 UTC m=+0.121852988 container init b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.114503047 +0000 UTC m=+0.126231373 container start b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.117521054 +0000 UTC m=+0.129249410 container attach b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 31 08:45:09 compute-0 lvm[255486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:45:09 compute-0 lvm[255486]: VG ceph_vg0 finished
Jan 31 08:45:09 compute-0 lvm[255487]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:45:09 compute-0 lvm[255487]: VG ceph_vg1 finished
Jan 31 08:45:09 compute-0 lvm[255489]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:45:09 compute-0 lvm[255489]: VG ceph_vg2 finished
Jan 31 08:45:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:09 compute-0 happy_engelbart[255408]: {}
Jan 31 08:45:09 compute-0 systemd[1]: libpod-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope: Deactivated successfully.
Jan 31 08:45:09 compute-0 systemd[1]: libpod-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope: Consumed 1.051s CPU time.
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.830757297 +0000 UTC m=+0.842485663 container died b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-731f9ddcc025c5616130354693c004ad7793f743df4ea3c352a523c8e2bdaa32-merged.mount: Deactivated successfully.
Jan 31 08:45:09 compute-0 podman[255391]: 2026-01-31 08:45:09.954067556 +0000 UTC m=+0.965795882 container remove b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:45:09 compute-0 systemd[1]: libpod-conmon-b8c6013e4c75f5b7a02da1bfea02e267d8ad0faf3ee32173a07beeb30dcf1877.scope: Deactivated successfully.
Jan 31 08:45:09 compute-0 sudo[255313]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:09 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:45:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:45:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:45:10 compute-0 sudo[255505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:45:10 compute-0 sudo[255505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:45:10 compute-0 sudo[255505]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:10 compute-0 ceph-mon[75227]: pgmap v1290: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:45:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:45:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:11 compute-0 ceph-mon[75227]: pgmap v1291: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:14 compute-0 ceph-mon[75227]: pgmap v1292: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:16 compute-0 ceph-mon[75227]: pgmap v1293: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:17 compute-0 ceph-mon[75227]: pgmap v1294: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:45:17.904 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:45:17.905 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:45:17.906 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:45:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/375406498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:45:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:45:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/375406498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:45:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/375406498' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:45:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/375406498' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:45:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:19 compute-0 ceph-mon[75227]: pgmap v1295: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:21 compute-0 nova_compute[238824]: 2026-01-31 08:45:21.366 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:22 compute-0 nova_compute[238824]: 2026-01-31 08:45:22.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:22 compute-0 nova_compute[238824]: 2026-01-31 08:45:22.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:45:22 compute-0 ceph-mon[75227]: pgmap v1296: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:23 compute-0 nova_compute[238824]: 2026-01-31 08:45:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:23 compute-0 nova_compute[238824]: 2026-01-31 08:45:23.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:23 compute-0 ceph-mon[75227]: pgmap v1297: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:26 compute-0 ceph-mon[75227]: pgmap v1298: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:27 compute-0 ceph-mon[75227]: pgmap v1299: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:28 compute-0 nova_compute[238824]: 2026-01-31 08:45:28.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:28 compute-0 nova_compute[238824]: 2026-01-31 08:45:28.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:45:28 compute-0 nova_compute[238824]: 2026-01-31 08:45:28.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:45:28 compute-0 nova_compute[238824]: 2026-01-31 08:45:28.354 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:45:28 compute-0 nova_compute[238824]: 2026-01-31 08:45:28.354 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:29 compute-0 nova_compute[238824]: 2026-01-31 08:45:29.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:30 compute-0 podman[255531]: 2026-01-31 08:45:30.1966718 +0000 UTC m=+0.080516505 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 08:45:30 compute-0 podman[255530]: 2026-01-31 08:45:30.202151657 +0000 UTC m=+0.086442195 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.373 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.373 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:30 compute-0 ceph-mon[75227]: pgmap v1300: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:45:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821916903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:30 compute-0 nova_compute[238824]: 2026-01-31 08:45:30.925 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.071 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.073 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5093MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.073 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.220 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.221 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.282 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.338 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.338 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.357 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.379 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.396 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:45:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/821916903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:45:31
Jan 31 08:45:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:45:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:45:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Jan 31 08:45:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:45:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:45:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646191734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.987 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:45:31 compute-0 nova_compute[238824]: 2026-01-31 08:45:31.992 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:45:32 compute-0 nova_compute[238824]: 2026-01-31 08:45:32.032 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:45:32 compute-0 nova_compute[238824]: 2026-01-31 08:45:32.034 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:45:32 compute-0 nova_compute[238824]: 2026-01-31 08:45:32.034 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:45:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:32 compute-0 ceph-mon[75227]: pgmap v1301: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:32 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1646191734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:45:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:45:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:45:33 compute-0 nova_compute[238824]: 2026-01-31 08:45:33.029 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:45:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:33 compute-0 ceph-mon[75227]: pgmap v1302: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:36 compute-0 ceph-mon[75227]: pgmap v1303: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:37 compute-0 ceph-mon[75227]: pgmap v1304: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:40 compute-0 ceph-mon[75227]: pgmap v1305: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:41 compute-0 ceph-mon[75227]: pgmap v1306: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:42 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:45:42.010 154977 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ae:5f:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd6:1b:f0:08:31:5c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 08:45:42 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:45:42.012 154977 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 08:45:42 compute-0 sshd-session[255615]: Accepted publickey for zuul from 192.168.122.30 port 33614 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:45:42 compute-0 systemd-logind[793]: New session 53 of user zuul.
Jan 31 08:45:42 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 31 08:45:42 compute-0 sshd-session[255615]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:45:42 compute-0 sudo[255688]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain iscsid.service
Jan 31 08:45:42 compute-0 sudo[255688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:42 compute-0 sudo[255688]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:42 compute-0 sudo[255713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_nova_compute.service
Jan 31 08:45:42 compute-0 sudo[255713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:42 compute-0 sudo[255713]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:43 compute-0 sudo[255738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_controller.service
Jan 31 08:45:43 compute-0 sudo[255738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:43 compute-0 sudo[255738]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:43 compute-0 sudo[255763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl list-units -a --no-pager --plain edpm_ovn_metadata_agent.service
Jan 31 08:45:43 compute-0 sudo[255763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:43 compute-0 sudo[255763]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:45:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:43 compute-0 ceph-mon[75227]: pgmap v1307: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:45 compute-0 ceph-mon[75227]: pgmap v1308: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:46 compute-0 sshd-session[255788]: Accepted publickey for zuul from 192.168.122.30 port 55604 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:45:46 compute-0 systemd-logind[793]: New session 54 of user zuul.
Jan 31 08:45:46 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 31 08:45:46 compute-0 sshd-session[255788]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:45:47 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:45:47.013 154977 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c8bc61c4-1b90-42d4-9c52-3d83532ede66, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 08:45:47 compute-0 sudo[255861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/test -f /var/podman_client_access_setup
Jan 31 08:45:47 compute-0 sudo[255861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 sudo[255861]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/groupadd -f podman
Jan 31 08:45:47 compute-0 sudo[255887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 groupadd[255889]: group added to /etc/group: name=podman, GID=42479
Jan 31 08:45:47 compute-0 groupadd[255889]: group added to /etc/gshadow: name=podman
Jan 31 08:45:47 compute-0 groupadd[255889]: new group: name=podman, GID=42479
Jan 31 08:45:47 compute-0 sudo[255887]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/usermod -a -G podman zuul
Jan 31 08:45:47 compute-0 sudo[255895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 usermod[255897]: add 'zuul' to group 'podman'
Jan 31 08:45:47 compute-0 usermod[255897]: add 'zuul' to shadow group 'podman'
Jan 31 08:45:47 compute-0 sudo[255895]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod -R o=wxr /etc/tmpfiles.d
Jan 31 08:45:47 compute-0 sudo[255904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 sudo[255904]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255907]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/echo 'd /run/podman 0770 root zuul'
Jan 31 08:45:47 compute-0 sudo[255907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 sudo[255907]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cp /lib/systemd/system/podman.socket /etc/systemd/system/podman.socket
Jan 31 08:45:47 compute-0 sudo[255910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 sudo[255910]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255913]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketMode 0660
Jan 31 08:45:47 compute-0 sudo[255913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 sudo[255913]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/crudini --set /etc/systemd/system/podman.socket Socket SocketGroup podman
Jan 31 08:45:47 compute-0 sudo[255916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 sudo[255916]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl daemon-reload
Jan 31 08:45:47 compute-0 sudo[255919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 systemd[1]: Reloading.
Jan 31 08:45:47 compute-0 systemd-sysv-generator[255948]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:45:47 compute-0 systemd-rc-local-generator[255944]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:45:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:47 compute-0 sudo[255919]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:47 compute-0 sudo[255955]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemd-tmpfiles --create
Jan 31 08:45:47 compute-0 sudo[255955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:47 compute-0 ceph-mon[75227]: pgmap v1309: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:48 compute-0 sudo[255955]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[255958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl enable --now podman.socket
Jan 31 08:45:48 compute-0 sudo[255958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 systemd[1]: Reloading.
Jan 31 08:45:48 compute-0 systemd-rc-local-generator[255984]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 08:45:48 compute-0 systemd-sysv-generator[255989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 08:45:48 compute-0 systemd[1]: Starting Podman API Socket...
Jan 31 08:45:48 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 31 08:45:48 compute-0 sudo[255958]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[255996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman
Jan 31 08:45:48 compute-0 sudo[255996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 sudo[255996]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[255999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chown -R root: /run/podman
Jan 31 08:45:48 compute-0 sudo[255999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 sudo[255999]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[256002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod g+rw /run/podman/podman.sock
Jan 31 08:45:48 compute-0 sudo[256002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 sudo[256002]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[256005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/chmod 777 /run/podman/podman.sock
Jan 31 08:45:48 compute-0 sudo[256005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 sudo[256005]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[256008]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/setenforce 0
Jan 31 08:45:48 compute-0 sudo[256008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 sudo[256008]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[256011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/systemctl restart podman.socket
Jan 31 08:45:48 compute-0 dbus-broker-launch[778]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Jan 31 08:45:48 compute-0 sudo[256011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 systemd[1]: podman.socket: Deactivated successfully.
Jan 31 08:45:48 compute-0 systemd[1]: Closed Podman API Socket.
Jan 31 08:45:48 compute-0 systemd[1]: Stopping Podman API Socket...
Jan 31 08:45:48 compute-0 systemd[1]: Starting Podman API Socket...
Jan 31 08:45:48 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 31 08:45:48 compute-0 sudo[256011]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sudo[255864]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/touch /var/podman_client_access_setup
Jan 31 08:45:48 compute-0 sudo[255864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:45:48 compute-0 sudo[255864]: pam_unix(sudo:session): session closed for user root
Jan 31 08:45:48 compute-0 sshd-session[256017]: Accepted publickey for zuul from 192.168.122.30 port 55606 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:45:48 compute-0 systemd-logind[793]: New session 55 of user zuul.
Jan 31 08:45:48 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 31 08:45:48 compute-0 sshd-session[256017]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:45:48 compute-0 systemd[1]: Starting Podman API Service...
Jan 31 08:45:48 compute-0 systemd[1]: Started Podman API Service.
Jan 31 08:45:48 compute-0 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 31 08:45:48 compute-0 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Setting parallel job count to 25"
Jan 31 08:45:48 compute-0 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Using sqlite as database backend"
Jan 31 08:45:48 compute-0 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 31 08:45:48 compute-0 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 31 08:45:48 compute-0 podman[256021]: time="2026-01-31T08:45:48Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 31 08:45:48 compute-0 podman[256021]: @ - - [31/Jan/2026:08:45:48 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 08:45:48 compute-0 podman[256021]: @ - - [31/Jan/2026:08:45:48 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22535 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Jan 31 08:45:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:49 compute-0 ceph-mon[75227]: pgmap v1310: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:51 compute-0 ceph-mon[75227]: pgmap v1311: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:53 compute-0 ceph-mon[75227]: pgmap v1312: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:55 compute-0 ceph-mon[75227]: pgmap v1313: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:45:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:57 compute-0 ceph-mon[75227]: pgmap v1314: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:45:59 compute-0 ceph-mon[75227]: pgmap v1315: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:01 compute-0 podman[256032]: 2026-01-31 08:46:01.158960097 +0000 UTC m=+0.046205323 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:46:01 compute-0 podman[256031]: 2026-01-31 08:46:01.178336692 +0000 UTC m=+0.066383871 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:46:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:01 compute-0 ceph-mon[75227]: pgmap v1316: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:03 compute-0 podman[256021]: time="2026-01-31T08:46:03Z" level=info msg="Received shutdown.Stop(), terminating!" PID=256021
Jan 31 08:46:03 compute-0 systemd[1]: podman.service: Deactivated successfully.
Jan 31 08:46:03 compute-0 ceph-mon[75227]: pgmap v1317: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:05 compute-0 ceph-mon[75227]: pgmap v1318: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:08 compute-0 ceph-mon[75227]: pgmap v1319: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:09 compute-0 ceph-mon[75227]: pgmap v1320: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:10 compute-0 sudo[256074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:10 compute-0 sudo[256074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:10 compute-0 sudo[256074]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:10 compute-0 sudo[256099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:46:10 compute-0 sudo[256099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:10 compute-0 sudo[256099]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:46:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:46:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:46:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:46:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:46:10 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:46:10 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:10 compute-0 sudo[256154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:10 compute-0 sudo[256154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:10 compute-0 sudo[256154]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:10 compute-0 sudo[256179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:46:10 compute-0 sudo[256179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:10 compute-0 podman[256215]: 2026-01-31 08:46:10.914067595 +0000 UTC m=+0.030541865 container create 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 08:46:10 compute-0 systemd[1]: Started libpod-conmon-0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c.scope.
Jan 31 08:46:10 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:46:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:46:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:10 compute-0 podman[256215]: 2026-01-31 08:46:10.988490895 +0000 UTC m=+0.104965185 container init 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:46:10 compute-0 podman[256215]: 2026-01-31 08:46:10.995133735 +0000 UTC m=+0.111608005 container start 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:46:10 compute-0 podman[256215]: 2026-01-31 08:46:10.901397673 +0000 UTC m=+0.017871963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:10 compute-0 sleepy_jang[256232]: 167 167
Jan 31 08:46:10 compute-0 systemd[1]: libpod-0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c.scope: Deactivated successfully.
Jan 31 08:46:10 compute-0 podman[256215]: 2026-01-31 08:46:10.999710016 +0000 UTC m=+0.116184326 container attach 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:46:11 compute-0 podman[256215]: 2026-01-31 08:46:11.000078187 +0000 UTC m=+0.116552487 container died 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:46:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bc3d128a2e40fa9de6c7add9dfeacb99564f59a73651dbd16ec012a274392cf-merged.mount: Deactivated successfully.
Jan 31 08:46:11 compute-0 podman[256215]: 2026-01-31 08:46:11.039849115 +0000 UTC m=+0.156323395 container remove 0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:11 compute-0 systemd[1]: libpod-conmon-0fd15a1a73ffeb77d2f26c5b390ac81426ab7d91f975cd7b5e70dd6b1c013c5c.scope: Deactivated successfully.
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.171376349 +0000 UTC m=+0.046621556 container create a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:46:11 compute-0 systemd[1]: Started libpod-conmon-a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192.scope.
Jan 31 08:46:11 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.153410214 +0000 UTC m=+0.028655411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.26855775 +0000 UTC m=+0.143803017 container init a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.27763641 +0000 UTC m=+0.152881617 container start a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.283386784 +0000 UTC m=+0.158631981 container attach a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:46:11 compute-0 affectionate_mendeleev[256272]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:46:11 compute-0 affectionate_mendeleev[256272]: --> All data devices are unavailable
Jan 31 08:46:11 compute-0 systemd[1]: libpod-a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192.scope: Deactivated successfully.
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.72174121 +0000 UTC m=+0.596986397 container died a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 31 08:46:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8eeaeb0510eda7fe092697704beb7c0f26f1e8d1444f87f9a8800a77b401f82a-merged.mount: Deactivated successfully.
Jan 31 08:46:11 compute-0 podman[256256]: 2026-01-31 08:46:11.765460952 +0000 UTC m=+0.640706109 container remove a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:11 compute-0 systemd[1]: libpod-conmon-a72b1ab4892aa79d96d4488c3614fe6970bafc23143935b3513c7cebbc3dd192.scope: Deactivated successfully.
Jan 31 08:46:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:11 compute-0 sudo[256296]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip --brief address list
Jan 31 08:46:11 compute-0 sudo[256296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:46:11 compute-0 sudo[256296]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:11 compute-0 sudo[256179]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:11 compute-0 sudo[256329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:11 compute-0 sudo[256329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:11 compute-0 sudo[256329]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:11 compute-0 sudo[256352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/sbin/ip -o netns list
Jan 31 08:46:11 compute-0 sudo[256352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:46:11 compute-0 sudo[256352]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:11 compute-0 sudo[256372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:46:11 compute-0 sudo[256372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:11 compute-0 sshd-session[255618]: Connection closed by 192.168.122.30 port 33614
Jan 31 08:46:11 compute-0 sshd-session[255615]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:46:11 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 31 08:46:11 compute-0 systemd-logind[793]: Session 53 logged out. Waiting for processes to exit.
Jan 31 08:46:11 compute-0 systemd-logind[793]: Removed session 53.
Jan 31 08:46:11 compute-0 ceph-mon[75227]: pgmap v1321: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:11 compute-0 sshd-session[255791]: Connection closed by 192.168.122.30 port 55604
Jan 31 08:46:11 compute-0 sshd-session[255788]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:46:11 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 08:46:11 compute-0 systemd-logind[793]: Session 54 logged out. Waiting for processes to exit.
Jan 31 08:46:11 compute-0 systemd-logind[793]: Removed session 54.
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.126824424 +0000 UTC m=+0.032389368 container create d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:46:12 compute-0 systemd[1]: Started libpod-conmon-d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59.scope.
Jan 31 08:46:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.201574364 +0000 UTC m=+0.107139318 container init d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.206236727 +0000 UTC m=+0.111801671 container start d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.112684529 +0000 UTC m=+0.018249493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.209363626 +0000 UTC m=+0.114928570 container attach d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 08:46:12 compute-0 hopeful_nash[256432]: 167 167
Jan 31 08:46:12 compute-0 systemd[1]: libpod-d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59.scope: Deactivated successfully.
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.212469405 +0000 UTC m=+0.118034379 container died d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6287dde528a9927502ae0ca96830833795336ff2cf9c96a3a2200218e0338c3-merged.mount: Deactivated successfully.
Jan 31 08:46:12 compute-0 podman[256416]: 2026-01-31 08:46:12.245531972 +0000 UTC m=+0.151096956 container remove d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:46:12 compute-0 systemd[1]: libpod-conmon-d7f966dcf26cf9eed93423057aff59da1a3c864fad91f6848f23634091871b59.scope: Deactivated successfully.
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.385698553 +0000 UTC m=+0.041883059 container create 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:46:12 compute-0 systemd[1]: Started libpod-conmon-1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a.scope.
Jan 31 08:46:12 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.366517404 +0000 UTC m=+0.022701880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.47187702 +0000 UTC m=+0.128061506 container init 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.480722783 +0000 UTC m=+0.136907239 container start 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.48375774 +0000 UTC m=+0.139942206 container attach 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:46:12 compute-0 vibrant_carson[256474]: {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:     "0": [
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:         {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "devices": [
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "/dev/loop3"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             ],
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_name": "ceph_lv0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_size": "21470642176",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "name": "ceph_lv0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "tags": {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.crush_device_class": "",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.encrypted": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.objectstore": "bluestore",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osd_id": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.type": "block",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.vdo": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.with_tpm": "0"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             },
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "type": "block",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "vg_name": "ceph_vg0"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:         }
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:     ],
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:     "1": [
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:         {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "devices": [
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "/dev/loop4"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             ],
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_name": "ceph_lv1",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_size": "21470642176",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "name": "ceph_lv1",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "tags": {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.crush_device_class": "",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.encrypted": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.objectstore": "bluestore",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osd_id": "1",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.type": "block",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.vdo": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.with_tpm": "0"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             },
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "type": "block",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "vg_name": "ceph_vg1"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:         }
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:     ],
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:     "2": [
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:         {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "devices": [
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "/dev/loop5"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             ],
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_name": "ceph_lv2",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_size": "21470642176",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "name": "ceph_lv2",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "tags": {
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.cluster_name": "ceph",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.crush_device_class": "",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.encrypted": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.objectstore": "bluestore",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osd_id": "2",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.type": "block",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.vdo": "0",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:                 "ceph.with_tpm": "0"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             },
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "type": "block",
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:             "vg_name": "ceph_vg2"
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:         }
Jan 31 08:46:12 compute-0 vibrant_carson[256474]:     ]
Jan 31 08:46:12 compute-0 vibrant_carson[256474]: }
Jan 31 08:46:12 compute-0 systemd[1]: libpod-1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a.scope: Deactivated successfully.
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.722469852 +0000 UTC m=+0.378654308 container died 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 31 08:46:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f0560ba962bb5adb863e95a7eb3fb420fb9287b0708bb66478e433fc14ae334-merged.mount: Deactivated successfully.
Jan 31 08:46:12 compute-0 podman[256457]: 2026-01-31 08:46:12.760005396 +0000 UTC m=+0.416189852 container remove 1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:46:12 compute-0 systemd[1]: libpod-conmon-1c15188a086a6c39c44a38d8cdf30d1f5f6da1e05c06ec8b78ef09bedbb5097a.scope: Deactivated successfully.
Jan 31 08:46:12 compute-0 sudo[256372]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:12 compute-0 sudo[256496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:46:12 compute-0 sudo[256496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:12 compute-0 sudo[256496]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:12 compute-0 sudo[256521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:46:12 compute-0 sudo[256521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.195643555 +0000 UTC m=+0.064910119 container create 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.151213843 +0000 UTC m=+0.020480417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:13 compute-0 systemd[1]: Started libpod-conmon-656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198.scope.
Jan 31 08:46:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:46:13 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 5958 writes, 26K keys, 5958 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 5958 writes, 5958 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1331 writes, 6035 keys, 1331 commit groups, 1.0 writes per commit group, ingest: 8.83 MB, 0.01 MB/s
                                           Interval WAL: 1331 writes, 1331 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.5      1.77              0.07        15    0.118       0      0       0.0       0.0
                                             L6      1/0    7.54 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     41.3     33.9      3.13              0.27        14    0.223     65K   7757       0.0       0.0
                                            Sum      1/0    7.54 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     26.4     28.0      4.89              0.34        29    0.169     65K   7757       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     48.6     48.3      0.84              0.09         8    0.105     21K   2566       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     41.3     33.9      3.13              0.27        14    0.223     65K   7757       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.5      1.76              0.07        14    0.126       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.13 GB write, 0.06 MB/s write, 0.13 GB read, 0.05 MB/s read, 4.9 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bf4c7858d0#2 capacity: 304.00 MB usage: 14.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000154 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(873,13.69 MB,4.5045%) FilterBlock(30,184.98 KB,0.0594239%) IndexBlock(30,341.11 KB,0.109577%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 31 08:46:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.318080439 +0000 UTC m=+0.187347013 container init 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.326370456 +0000 UTC m=+0.195637050 container start 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 31 08:46:13 compute-0 eloquent_bouman[256572]: 167 167
Jan 31 08:46:13 compute-0 systemd[1]: libpod-656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198.scope: Deactivated successfully.
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.339542333 +0000 UTC m=+0.208808907 container attach 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.339870122 +0000 UTC m=+0.209136676 container died 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:46:13 compute-0 sshd-session[256020]: Connection closed by 192.168.122.30 port 55606
Jan 31 08:46:13 compute-0 sshd-session[256017]: pam_unix(sshd:session): session closed for user zuul
Jan 31 08:46:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6decfe61cde12582084cfb6e143a8ebb0713e9af75fd1d5c9937d037e6bd31d5-merged.mount: Deactivated successfully.
Jan 31 08:46:13 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 31 08:46:13 compute-0 systemd-logind[793]: Session 55 logged out. Waiting for processes to exit.
Jan 31 08:46:13 compute-0 systemd-logind[793]: Removed session 55.
Jan 31 08:46:13 compute-0 podman[256556]: 2026-01-31 08:46:13.375405949 +0000 UTC m=+0.244672523 container remove 656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:46:13 compute-0 systemd[1]: libpod-conmon-656e2eb1f196961a9c622b5b15ef296f0eb893b842bb9d35f354772d2b866198.scope: Deactivated successfully.
Jan 31 08:46:13 compute-0 podman[256596]: 2026-01-31 08:46:13.517511516 +0000 UTC m=+0.041093187 container create 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 31 08:46:13 compute-0 systemd[1]: Started libpod-conmon-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope.
Jan 31 08:46:13 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:46:13 compute-0 podman[256596]: 2026-01-31 08:46:13.496437553 +0000 UTC m=+0.020019274 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:46:13 compute-0 podman[256596]: 2026-01-31 08:46:13.595729535 +0000 UTC m=+0.119311226 container init 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:46:13 compute-0 podman[256596]: 2026-01-31 08:46:13.602818638 +0000 UTC m=+0.126400309 container start 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:46:13 compute-0 podman[256596]: 2026-01-31 08:46:13.605773363 +0000 UTC m=+0.129355034 container attach 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:46:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:13 compute-0 ceph-mon[75227]: pgmap v1322: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:14 compute-0 lvm[256690]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:46:14 compute-0 lvm[256691]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:46:14 compute-0 lvm[256690]: VG ceph_vg0 finished
Jan 31 08:46:14 compute-0 lvm[256691]: VG ceph_vg1 finished
Jan 31 08:46:14 compute-0 lvm[256693]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:46:14 compute-0 lvm[256693]: VG ceph_vg2 finished
Jan 31 08:46:14 compute-0 blissful_blackwell[256612]: {}
Jan 31 08:46:14 compute-0 systemd[1]: libpod-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope: Deactivated successfully.
Jan 31 08:46:14 compute-0 systemd[1]: libpod-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope: Consumed 1.024s CPU time.
Jan 31 08:46:14 compute-0 podman[256596]: 2026-01-31 08:46:14.304979234 +0000 UTC m=+0.828560945 container died 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 31 08:46:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-df28d53928f226406745d817fad1a9111b08807680b6f75a655019eff25cf5e1-merged.mount: Deactivated successfully.
Jan 31 08:46:14 compute-0 podman[256596]: 2026-01-31 08:46:14.349683474 +0000 UTC m=+0.873265145 container remove 862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:46:14 compute-0 systemd[1]: libpod-conmon-862b3ab10cb5b90d11d705b30db7ffdbaee22133e12994d01ca3d12a9dbf99ec.scope: Deactivated successfully.
Jan 31 08:46:14 compute-0 sudo[256521]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:46:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:46:14 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:46:14 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:46:14 compute-0 sudo[256708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:46:14 compute-0 sudo[256708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:46:14 compute-0 sudo[256708]: pam_unix(sudo:session): session closed for user root
Jan 31 08:46:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:46:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:46:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:16 compute-0 ceph-mon[75227]: pgmap v1323: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:17 compute-0 ceph-mon[75227]: pgmap v1324: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:46:17.906 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:46:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:46:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:46:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130115664' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:46:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:46:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130115664' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:46:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3130115664' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:46:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/3130115664' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:46:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:19 compute-0 ceph-mon[75227]: pgmap v1325: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:21 compute-0 nova_compute[238824]: 2026-01-31 08:46:21.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:21 compute-0 ceph-mon[75227]: pgmap v1326: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:23 compute-0 nova_compute[238824]: 2026-01-31 08:46:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:23 compute-0 nova_compute[238824]: 2026-01-31 08:46:23.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:46:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:23 compute-0 ceph-mon[75227]: pgmap v1327: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:25 compute-0 nova_compute[238824]: 2026-01-31 08:46:25.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:25 compute-0 nova_compute[238824]: 2026-01-31 08:46:25.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:25 compute-0 ceph-mon[75227]: pgmap v1328: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:27 compute-0 ceph-mon[75227]: pgmap v1329: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:28 compute-0 nova_compute[238824]: 2026-01-31 08:46:28.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:29 compute-0 nova_compute[238824]: 2026-01-31 08:46:29.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:29 compute-0 nova_compute[238824]: 2026-01-31 08:46:29.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:46:29 compute-0 nova_compute[238824]: 2026-01-31 08:46:29.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:46:29 compute-0 nova_compute[238824]: 2026-01-31 08:46:29.354 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:46:29 compute-0 nova_compute[238824]: 2026-01-31 08:46:29.354 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:29 compute-0 ceph-mon[75227]: pgmap v1330: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:30 compute-0 sshd-session[256734]: Connection closed by 92.118.39.76 port 45960
Jan 31 08:46:30 compute-0 nova_compute[238824]: 2026-01-31 08:46:30.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:46:31
Jan 31 08:46:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:46:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:46:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta']
Jan 31 08:46:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:46:31 compute-0 ceph-mon[75227]: pgmap v1331: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:32 compute-0 podman[256736]: 2026-01-31 08:46:32.17560632 +0000 UTC m=+0.056642690 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 08:46:32 compute-0 podman[256735]: 2026-01-31 08:46:32.206325024 +0000 UTC m=+0.091269926 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.372 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.373 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.374 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:46:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:46:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:46:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762862891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:32 compute-0 nova_compute[238824]: 2026-01-31 08:46:32.876 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:32 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2762862891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.047 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.048 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5105MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.049 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.049 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.119 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.119 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.136 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:46:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:46:33 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601642418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.675 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.679 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.695 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.697 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:46:33 compute-0 nova_compute[238824]: 2026-01-31 08:46:33.697 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:46:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:33 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/601642418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:46:33 compute-0 ceph-mon[75227]: pgmap v1332: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:34 compute-0 nova_compute[238824]: 2026-01-31 08:46:34.693 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:46:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:35 compute-0 ceph-mon[75227]: pgmap v1333: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:37 compute-0 ceph-mon[75227]: pgmap v1334: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:39 compute-0 ceph-mon[75227]: pgmap v1335: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:41 compute-0 ceph-mon[75227]: pgmap v1336: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:46:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:43 compute-0 ceph-mon[75227]: pgmap v1337: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:45 compute-0 ceph-mon[75227]: pgmap v1338: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:48 compute-0 ceph-mon[75227]: pgmap v1339: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:49 compute-0 ceph-mon[75227]: pgmap v1340: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:50 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:51 compute-0 ceph-mon[75227]: pgmap v1341: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:54 compute-0 ceph-mon[75227]: pgmap v1342: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:55 compute-0 ceph-mon[75227]: pgmap v1343: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:46:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:57 compute-0 ceph-mon[75227]: pgmap v1344: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:46:59 compute-0 ceph-mon[75227]: pgmap v1345: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:00 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:01 compute-0 ceph-mon[75227]: pgmap v1346: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:03 compute-0 podman[256826]: 2026-01-31 08:47:03.174961903 +0000 UTC m=+0.068306196 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 08:47:03 compute-0 podman[256825]: 2026-01-31 08:47:03.204135472 +0000 UTC m=+0.098681049 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:47:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:03 compute-0 ceph-mon[75227]: pgmap v1347: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:05 compute-0 ceph-mon[75227]: pgmap v1348: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:05 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:07 compute-0 ceph-mon[75227]: pgmap v1349: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:09 compute-0 ceph-mon[75227]: pgmap v1350: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:10 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:11 compute-0 ceph-mon[75227]: pgmap v1351: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:13 compute-0 ceph-mon[75227]: pgmap v1352: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:14 compute-0 sudo[256870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:14 compute-0 sudo[256870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:14 compute-0 sudo[256870]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:14 compute-0 sudo[256895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:47:14 compute-0 sudo[256895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:15 compute-0 sudo[256895]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:47:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:47:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:47:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:47:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:47:15 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:47:15 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:47:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:47:15 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:47:15 compute-0 sudo[256951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:15 compute-0 sudo[256951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:15 compute-0 sudo[256951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:15 compute-0 sudo[256976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:47:15 compute-0 sudo[256976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.423009106 +0000 UTC m=+0.044988145 container create 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:47:15 compute-0 systemd[1]: Started libpod-conmon-31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4.scope.
Jan 31 08:47:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.492963437 +0000 UTC m=+0.114942496 container init 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.398767579 +0000 UTC m=+0.020746638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.499508605 +0000 UTC m=+0.121487634 container start 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.502702637 +0000 UTC m=+0.124681676 container attach 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:15 compute-0 angry_goldberg[257029]: 167 167
Jan 31 08:47:15 compute-0 systemd[1]: libpod-31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4.scope: Deactivated successfully.
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.505491607 +0000 UTC m=+0.127470646 container died 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:47:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9daa876ec012f9cdf64f3bdc54074bc32e4b21d8dabdf80bdca2e2f7e23b2b3a-merged.mount: Deactivated successfully.
Jan 31 08:47:15 compute-0 podman[257013]: 2026-01-31 08:47:15.545889189 +0000 UTC m=+0.167868228 container remove 31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:15 compute-0 systemd[1]: libpod-conmon-31b8167b9aa73984656d12fee83945595041f8bb17814022978cda3d7a2068f4.scope: Deactivated successfully.
Jan 31 08:47:15 compute-0 podman[257054]: 2026-01-31 08:47:15.674286502 +0000 UTC m=+0.035732359 container create de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 08:47:15 compute-0 systemd[1]: Started libpod-conmon-de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68.scope.
Jan 31 08:47:15 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:15 compute-0 podman[257054]: 2026-01-31 08:47:15.743214805 +0000 UTC m=+0.104660662 container init de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:47:15 compute-0 podman[257054]: 2026-01-31 08:47:15.750081932 +0000 UTC m=+0.111527789 container start de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 31 08:47:15 compute-0 podman[257054]: 2026-01-31 08:47:15.753095509 +0000 UTC m=+0.114541466 container attach de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:15 compute-0 podman[257054]: 2026-01-31 08:47:15.656772058 +0000 UTC m=+0.018217935 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:15 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:47:16 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6532 writes, 26K keys, 6532 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6532 writes, 1295 syncs, 5.04 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 346 writes, 673 keys, 346 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 346 writes, 170 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:47:16 compute-0 stoic_yalow[257071]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:47:16 compute-0 stoic_yalow[257071]: --> All data devices are unavailable
Jan 31 08:47:16 compute-0 ceph-mon[75227]: pgmap v1353: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:16 compute-0 systemd[1]: libpod-de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68.scope: Deactivated successfully.
Jan 31 08:47:16 compute-0 podman[257054]: 2026-01-31 08:47:16.15227979 +0000 UTC m=+0.513725647 container died de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7dc237bdf5756beab2caba073d98aeac6623b5400f30a8af1c98144f0b6d4b1-merged.mount: Deactivated successfully.
Jan 31 08:47:16 compute-0 podman[257054]: 2026-01-31 08:47:16.191789097 +0000 UTC m=+0.553234954 container remove de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:47:16 compute-0 systemd[1]: libpod-conmon-de889504e6212085eb3926e990a21f17e268ef507c461db23dd9e9613f96ed68.scope: Deactivated successfully.
Jan 31 08:47:16 compute-0 sudo[256976]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:16 compute-0 sudo[257105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:16 compute-0 sudo[257105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:16 compute-0 sudo[257105]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:16 compute-0 sudo[257130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:47:16 compute-0 sudo[257130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.59207924 +0000 UTC m=+0.035198424 container create 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:47:16 compute-0 systemd[1]: Started libpod-conmon-432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181.scope.
Jan 31 08:47:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.660060365 +0000 UTC m=+0.103179569 container init 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.664220395 +0000 UTC m=+0.107339579 container start 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.666899622 +0000 UTC m=+0.110018806 container attach 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:47:16 compute-0 nostalgic_cohen[257183]: 167 167
Jan 31 08:47:16 compute-0 systemd[1]: libpod-432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181.scope: Deactivated successfully.
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.668801336 +0000 UTC m=+0.111920530 container died 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.576155502 +0000 UTC m=+0.019274716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b65ceaa0dfb2331a71630f355eac743b4e2925d1c9138511179c6bdbcc3806c-merged.mount: Deactivated successfully.
Jan 31 08:47:16 compute-0 podman[257167]: 2026-01-31 08:47:16.710385302 +0000 UTC m=+0.153504486 container remove 432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_cohen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:47:16 compute-0 systemd[1]: libpod-conmon-432e528c3767c0339264bc118c255a359a5230cb5f4f86d4b4095d332846c181.scope: Deactivated successfully.
Jan 31 08:47:16 compute-0 podman[257209]: 2026-01-31 08:47:16.834685357 +0000 UTC m=+0.033184635 container create c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 08:47:16 compute-0 systemd[1]: Started libpod-conmon-c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38.scope.
Jan 31 08:47:16 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:16 compute-0 podman[257209]: 2026-01-31 08:47:16.82017885 +0000 UTC m=+0.018678158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:16 compute-0 podman[257209]: 2026-01-31 08:47:16.93455389 +0000 UTC m=+0.133053208 container init c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 08:47:16 compute-0 podman[257209]: 2026-01-31 08:47:16.940436339 +0000 UTC m=+0.138935617 container start c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:16 compute-0 podman[257209]: 2026-01-31 08:47:16.946800012 +0000 UTC m=+0.145299300 container attach c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]: {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:     "0": [
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:         {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "devices": [
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "/dev/loop3"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             ],
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_name": "ceph_lv0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_size": "21470642176",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "name": "ceph_lv0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "tags": {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.crush_device_class": "",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.encrypted": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.objectstore": "bluestore",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osd_id": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.type": "block",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.vdo": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.with_tpm": "0"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             },
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "type": "block",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "vg_name": "ceph_vg0"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:         }
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:     ],
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:     "1": [
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:         {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "devices": [
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "/dev/loop4"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             ],
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_name": "ceph_lv1",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_size": "21470642176",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "name": "ceph_lv1",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "tags": {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.crush_device_class": "",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.encrypted": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.objectstore": "bluestore",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osd_id": "1",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.type": "block",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.vdo": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.with_tpm": "0"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             },
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "type": "block",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "vg_name": "ceph_vg1"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:         }
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:     ],
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:     "2": [
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:         {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "devices": [
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "/dev/loop5"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             ],
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_name": "ceph_lv2",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_size": "21470642176",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "name": "ceph_lv2",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "tags": {
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.cluster_name": "ceph",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.crush_device_class": "",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.encrypted": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.objectstore": "bluestore",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osd_id": "2",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.type": "block",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.vdo": "0",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:                 "ceph.with_tpm": "0"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             },
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "type": "block",
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:             "vg_name": "ceph_vg2"
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:         }
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]:     ]
Jan 31 08:47:17 compute-0 hopeful_tesla[257226]: }
Jan 31 08:47:17 compute-0 systemd[1]: libpod-c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38.scope: Deactivated successfully.
Jan 31 08:47:17 compute-0 podman[257209]: 2026-01-31 08:47:17.218174907 +0000 UTC m=+0.416674185 container died c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8120aebfee65637e726cee9787314f0548c134c78a9157b251a6de279eb0f9d-merged.mount: Deactivated successfully.
Jan 31 08:47:17 compute-0 podman[257209]: 2026-01-31 08:47:17.258308432 +0000 UTC m=+0.456807710 container remove c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:47:17 compute-0 systemd[1]: libpod-conmon-c11283ee1d67fca4225094b5e283c52028c85c3ee3ba29db8ef08b1df5d3fb38.scope: Deactivated successfully.
Jan 31 08:47:17 compute-0 sudo[257130]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:17 compute-0 sudo[257245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:47:17 compute-0 sudo[257245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:17 compute-0 sudo[257245]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:17 compute-0 sudo[257270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:47:17 compute-0 sudo[257270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.666890523 +0000 UTC m=+0.040901937 container create 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:47:17 compute-0 systemd[1]: Started libpod-conmon-6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a.scope.
Jan 31 08:47:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.735841297 +0000 UTC m=+0.109852741 container init 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.741099878 +0000 UTC m=+0.115111302 container start 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.648321479 +0000 UTC m=+0.022332923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:17 compute-0 tender_edison[257324]: 167 167
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.744239938 +0000 UTC m=+0.118251372 container attach 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:47:17 compute-0 systemd[1]: libpod-6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a.scope: Deactivated successfully.
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.74502145 +0000 UTC m=+0.119032874 container died 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 08:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dbdab9b7e5a713f8af88d5a15a5dd10a10ba57949f0b464c36459bc07614d4d-merged.mount: Deactivated successfully.
Jan 31 08:47:17 compute-0 podman[257307]: 2026-01-31 08:47:17.78222114 +0000 UTC m=+0.156232564 container remove 6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 08:47:17 compute-0 systemd[1]: libpod-conmon-6efea40f5722b7b864604970c54d5e026286e16796960f6622c669f8bada494a.scope: Deactivated successfully.
Jan 31 08:47:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:17 compute-0 ceph-mon[75227]: pgmap v1354: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:17 compute-0 podman[257350]: 2026-01-31 08:47:17.899972367 +0000 UTC m=+0.033102943 container create 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:47:17.907 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:47:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:47:17.908 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:17 compute-0 systemd[1]: Started libpod-conmon-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope.
Jan 31 08:47:17 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:47:17 compute-0 podman[257350]: 2026-01-31 08:47:17.968748885 +0000 UTC m=+0.101879491 container init 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:47:17 compute-0 podman[257350]: 2026-01-31 08:47:17.974229843 +0000 UTC m=+0.107360389 container start 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 31 08:47:17 compute-0 podman[257350]: 2026-01-31 08:47:17.978024302 +0000 UTC m=+0.111154898 container attach 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:47:17 compute-0 podman[257350]: 2026-01-31 08:47:17.885427599 +0000 UTC m=+0.018558165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:47:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:47:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1529098669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:47:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:47:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1529098669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:47:18 compute-0 lvm[257444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:47:18 compute-0 lvm[257445]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:47:18 compute-0 lvm[257444]: VG ceph_vg0 finished
Jan 31 08:47:18 compute-0 lvm[257445]: VG ceph_vg1 finished
Jan 31 08:47:18 compute-0 lvm[257447]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:47:18 compute-0 lvm[257447]: VG ceph_vg2 finished
Jan 31 08:47:18 compute-0 ecstatic_haslett[257366]: {}
Jan 31 08:47:18 compute-0 systemd[1]: libpod-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope: Deactivated successfully.
Jan 31 08:47:18 compute-0 systemd[1]: libpod-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope: Consumed 1.021s CPU time.
Jan 31 08:47:18 compute-0 podman[257350]: 2026-01-31 08:47:18.792298602 +0000 UTC m=+0.925429138 container died 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:47:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-08099d69b490cfbd1b831fc621d9e94f5cc3ad4b256d175d9966cd0ca2dcf223-merged.mount: Deactivated successfully.
Jan 31 08:47:18 compute-0 podman[257350]: 2026-01-31 08:47:18.841964581 +0000 UTC m=+0.975095117 container remove 25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:47:18 compute-0 systemd[1]: libpod-conmon-25b393a3d4e8834ccb797abc2439166e30d8053fc1db635e55e800990cbf9759.scope: Deactivated successfully.
Jan 31 08:47:18 compute-0 sudo[257270]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1529098669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:47:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1529098669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:47:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:47:18 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:47:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:47:18 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:47:18 compute-0 sudo[257464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:47:18 compute-0 sudo[257464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:47:18 compute-0 sudo[257464]: pam_unix(sudo:session): session closed for user root
Jan 31 08:47:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:47:19 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:47:19 compute-0 ceph-mon[75227]: pgmap v1355: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:47:21 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 7939 writes, 31K keys, 7939 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7939 writes, 1746 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 313 writes, 601 keys, 313 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 313 writes, 149 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:47:21 compute-0 nova_compute[238824]: 2026-01-31 08:47:21.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:21 compute-0 ceph-mon[75227]: pgmap v1356: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:24 compute-0 ceph-mon[75227]: pgmap v1357: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:25 compute-0 nova_compute[238824]: 2026-01-31 08:47:25.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:25 compute-0 nova_compute[238824]: 2026-01-31 08:47:25.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:47:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:25 compute-0 ceph-mon[75227]: pgmap v1358: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:26 compute-0 nova_compute[238824]: 2026-01-31 08:47:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:27 compute-0 nova_compute[238824]: 2026-01-31 08:47:27.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:47:27 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.8 total, 600.0 interval
                                           Cumulative writes: 6379 writes, 26K keys, 6379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6379 writes, 1179 syncs, 5.41 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 245 writes, 416 keys, 245 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                           Interval WAL: 245 writes, 117 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:47:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:28 compute-0 ceph-mon[75227]: pgmap v1359: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:29 compute-0 nova_compute[238824]: 2026-01-31 08:47:29.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:30 compute-0 ceph-mon[75227]: pgmap v1360: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:30 compute-0 nova_compute[238824]: 2026-01-31 08:47:30.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:30 compute-0 nova_compute[238824]: 2026-01-31 08:47:30.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:47:30 compute-0 nova_compute[238824]: 2026-01-31 08:47:30.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:47:30 compute-0 nova_compute[238824]: 2026-01-31 08:47:30.356 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:47:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:47:31
Jan 31 08:47:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:47:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:47:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'vms', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.log']
Jan 31 08:47:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:47:31 compute-0 ceph-mon[75227]: pgmap v1361: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [devicehealth INFO root] Check health
Jan 31 08:47:32 compute-0 nova_compute[238824]: 2026-01-31 08:47:32.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:47:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.365 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.366 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:47:33 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434740814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.868 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:33 compute-0 ceph-mon[75227]: pgmap v1362: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:33 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3434740814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.994 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.995 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5072MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.996 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:47:33 compute-0 nova_compute[238824]: 2026-01-31 08:47:33.996 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.054 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.054 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.071 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:47:34 compute-0 podman[257513]: 2026-01-31 08:47:34.156963203 +0000 UTC m=+0.050053160 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:47:34 compute-0 podman[257512]: 2026-01-31 08:47:34.179775729 +0000 UTC m=+0.073288199 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 31 08:47:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:47:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144545224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.602 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.608 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.754 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.756 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:47:34 compute-0 nova_compute[238824]: 2026-01-31 08:47:34.756 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:47:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1144545224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:47:35 compute-0 nova_compute[238824]: 2026-01-31 08:47:35.751 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:47:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:35 compute-0 ceph-mon[75227]: pgmap v1363: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.923701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849257923749, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2050, "num_deletes": 251, "total_data_size": 3454811, "memory_usage": 3515760, "flush_reason": "Manual Compaction"}
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 08:47:37 compute-0 ceph-mon[75227]: pgmap v1364: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849257952762, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3388061, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25729, "largest_seqno": 27778, "table_properties": {"data_size": 3378673, "index_size": 5946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18646, "raw_average_key_size": 20, "raw_value_size": 3360081, "raw_average_value_size": 3616, "num_data_blocks": 264, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849031, "oldest_key_time": 1769849031, "file_creation_time": 1769849257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 29157 microseconds, and 8872 cpu microseconds.
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.952854) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3388061 bytes OK
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.952888) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.956185) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.956218) EVENT_LOG_v1 {"time_micros": 1769849257956207, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.956283) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3446234, prev total WAL file size 3446234, number of live WAL files 2.
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.957889) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3308KB)], [59(7718KB)]
Jan 31 08:47:37 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849257957994, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11292250, "oldest_snapshot_seqno": -1}
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5150 keys, 9454183 bytes, temperature: kUnknown
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849258058753, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9454183, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9417686, "index_size": 22499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 127884, "raw_average_key_size": 24, "raw_value_size": 9322525, "raw_average_value_size": 1810, "num_data_blocks": 930, "num_entries": 5150, "num_filter_entries": 5150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849257, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.058976) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9454183 bytes
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.060622) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.0 rd, 93.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5664, records dropped: 514 output_compression: NoCompression
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.060640) EVENT_LOG_v1 {"time_micros": 1769849258060631, "job": 32, "event": "compaction_finished", "compaction_time_micros": 100827, "compaction_time_cpu_micros": 27815, "output_level": 6, "num_output_files": 1, "total_output_size": 9454183, "num_input_records": 5664, "num_output_records": 5150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849258061328, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849258062455, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:37.957708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:38 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:47:38.062702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:47:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:39 compute-0 ceph-mon[75227]: pgmap v1365: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:41 compute-0 ceph-mon[75227]: pgmap v1366: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:47:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:43 compute-0 ceph-mon[75227]: pgmap v1367: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:45 compute-0 ceph-mon[75227]: pgmap v1368: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:48 compute-0 ceph-mon[75227]: pgmap v1369: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:50 compute-0 ceph-mon[75227]: pgmap v1370: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:51 compute-0 ceph-mon[75227]: pgmap v1371: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:53 compute-0 ceph-mon[75227]: pgmap v1372: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:55 compute-0 ceph-mon[75227]: pgmap v1373: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:47:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:57 compute-0 ceph-mon[75227]: pgmap v1374: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:47:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:00 compute-0 ceph-mon[75227]: pgmap v1375: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:01 compute-0 ceph-mon[75227]: pgmap v1376: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:03 compute-0 ceph-mon[75227]: pgmap v1377: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:05 compute-0 podman[257578]: 2026-01-31 08:48:05.167987553 +0000 UTC m=+0.051130572 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 08:48:05 compute-0 podman[257577]: 2026-01-31 08:48:05.194541797 +0000 UTC m=+0.077304205 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 31 08:48:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:05 compute-0 ceph-mon[75227]: pgmap v1378: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:07 compute-0 ceph-mon[75227]: pgmap v1379: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:09 compute-0 ceph-mon[75227]: pgmap v1380: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:11 compute-0 ceph-mon[75227]: pgmap v1381: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:13 compute-0 ceph-mon[75227]: pgmap v1382: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:15 compute-0 ceph-mon[75227]: pgmap v1383: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:48:17.909 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:48:17.909 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:48:17.909 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:17 compute-0 ceph-mon[75227]: pgmap v1384: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:48:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4277259967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:48:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:48:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4277259967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:48:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/4277259967' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:48:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/4277259967' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:48:19 compute-0 sudo[257621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:19 compute-0 sudo[257621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:19 compute-0 sudo[257621]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:19 compute-0 sudo[257646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 31 08:48:19 compute-0 sudo[257646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:19 compute-0 sudo[257646]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:48:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:19 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:48:19 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:19 compute-0 sudo[257692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:19 compute-0 sudo[257692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:19 compute-0 sudo[257692]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:19 compute-0 sudo[257717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:48:19 compute-0 sudo[257717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:20 compute-0 sudo[257717]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:20 compute-0 sudo[257772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:20 compute-0 sudo[257772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:20 compute-0 sudo[257772]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:20 compute-0 sudo[257797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:48:20 compute-0 sudo[257797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:20 compute-0 ceph-mon[75227]: pgmap v1385: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:48:20 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:48:20 compute-0 podman[257834]: 2026-01-31 08:48:20.442219895 +0000 UTC m=+0.025874795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:20 compute-0 podman[257834]: 2026-01-31 08:48:20.5765977 +0000 UTC m=+0.160252540 container create 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:48:20 compute-0 systemd[1]: Started libpod-conmon-9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42.scope.
Jan 31 08:48:20 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:20 compute-0 podman[257834]: 2026-01-31 08:48:20.738885498 +0000 UTC m=+0.322540388 container init 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:48:20 compute-0 podman[257834]: 2026-01-31 08:48:20.74802502 +0000 UTC m=+0.331679810 container start 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:48:20 compute-0 gifted_lehmann[257850]: 167 167
Jan 31 08:48:20 compute-0 systemd[1]: libpod-9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42.scope: Deactivated successfully.
Jan 31 08:48:20 compute-0 podman[257834]: 2026-01-31 08:48:20.763118165 +0000 UTC m=+0.346772975 container attach 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:48:20 compute-0 podman[257834]: 2026-01-31 08:48:20.763973429 +0000 UTC m=+0.347628239 container died 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-584831878db90c34905d5c3aa0c914c8ff443951e49d802e6b04ce7db08a4668-merged.mount: Deactivated successfully.
Jan 31 08:48:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:21 compute-0 podman[257834]: 2026-01-31 08:48:21.085227549 +0000 UTC m=+0.668882339 container remove 9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_lehmann, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 08:48:21 compute-0 systemd[1]: libpod-conmon-9d470588ea503e062e93bad9bda5b94d6e2e7a09a6274ab39d544451d4c6bb42.scope: Deactivated successfully.
Jan 31 08:48:21 compute-0 podman[257874]: 2026-01-31 08:48:21.20173961 +0000 UTC m=+0.019123361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:21 compute-0 nova_compute[238824]: 2026-01-31 08:48:21.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:21 compute-0 podman[257874]: 2026-01-31 08:48:21.391355064 +0000 UTC m=+0.208738795 container create 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:21 compute-0 systemd[1]: Started libpod-conmon-6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127.scope.
Jan 31 08:48:21 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:21 compute-0 podman[257874]: 2026-01-31 08:48:21.580494574 +0000 UTC m=+0.397878395 container init 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:48:21 compute-0 podman[257874]: 2026-01-31 08:48:21.586776485 +0000 UTC m=+0.404160256 container start 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:21 compute-0 podman[257874]: 2026-01-31 08:48:21.612205166 +0000 UTC m=+0.429588897 container attach 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:48:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:21 compute-0 ceph-mon[75227]: pgmap v1386: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:21 compute-0 compassionate_jackson[257890]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:48:21 compute-0 compassionate_jackson[257890]: --> All data devices are unavailable
Jan 31 08:48:21 compute-0 systemd[1]: libpod-6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127.scope: Deactivated successfully.
Jan 31 08:48:21 compute-0 podman[257874]: 2026-01-31 08:48:21.99374715 +0000 UTC m=+0.811130881 container died 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 31 08:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-26116fca06276571b5b26b5ae5ee0245969551770306d68e766a586188af961c-merged.mount: Deactivated successfully.
Jan 31 08:48:22 compute-0 podman[257874]: 2026-01-31 08:48:22.055435544 +0000 UTC m=+0.872819265 container remove 6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:48:22 compute-0 systemd[1]: libpod-conmon-6955ec4c8612f5354d03e5734add3401c83270600e8bf671688d1cbd0c0e1127.scope: Deactivated successfully.
Jan 31 08:48:22 compute-0 sudo[257797]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:22 compute-0 sudo[257924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:22 compute-0 sudo[257924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:22 compute-0 sudo[257924]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:22 compute-0 sudo[257949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:48:22 compute-0 sudo[257949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.489015855 +0000 UTC m=+0.048749224 container create 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:48:22 compute-0 systemd[1]: Started libpod-conmon-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope.
Jan 31 08:48:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.458179088 +0000 UTC m=+0.017912447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.56049032 +0000 UTC m=+0.120223669 container init 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.566195274 +0000 UTC m=+0.125928613 container start 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 31 08:48:22 compute-0 eloquent_almeida[258002]: 167 167
Jan 31 08:48:22 compute-0 systemd[1]: libpod-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope: Deactivated successfully.
Jan 31 08:48:22 compute-0 conmon[258002]: conmon 15c9173e0179c2dad5a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope/container/memory.events
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.573318019 +0000 UTC m=+0.133051348 container attach 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.574309538 +0000 UTC m=+0.134042867 container died 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 31 08:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b6000616a898a1a3a1da4cc8440deee821b40f473d4e7f5f8820d045adfafd9-merged.mount: Deactivated successfully.
Jan 31 08:48:22 compute-0 podman[257986]: 2026-01-31 08:48:22.656114311 +0000 UTC m=+0.215847630 container remove 15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_almeida, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:48:22 compute-0 systemd[1]: libpod-conmon-15c9173e0179c2dad5a2575144da10963032fc897ef9b5c6542af8cbe508fc78.scope: Deactivated successfully.
Jan 31 08:48:22 compute-0 podman[258026]: 2026-01-31 08:48:22.788393815 +0000 UTC m=+0.042077371 container create 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:48:22 compute-0 systemd[1]: Started libpod-conmon-4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01.scope.
Jan 31 08:48:22 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:22 compute-0 podman[258026]: 2026-01-31 08:48:22.766135425 +0000 UTC m=+0.019819011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:22 compute-0 podman[258026]: 2026-01-31 08:48:22.865830933 +0000 UTC m=+0.119514509 container init 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 08:48:22 compute-0 podman[258026]: 2026-01-31 08:48:22.871412683 +0000 UTC m=+0.125096239 container start 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 08:48:22 compute-0 podman[258026]: 2026-01-31 08:48:22.882501742 +0000 UTC m=+0.136185298 container attach 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]: {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:     "0": [
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:         {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "devices": [
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "/dev/loop3"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             ],
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_name": "ceph_lv0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_size": "21470642176",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "name": "ceph_lv0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "tags": {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.crush_device_class": "",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.encrypted": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.objectstore": "bluestore",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osd_id": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.type": "block",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.vdo": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.with_tpm": "0"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             },
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "type": "block",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "vg_name": "ceph_vg0"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:         }
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:     ],
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:     "1": [
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:         {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "devices": [
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "/dev/loop4"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             ],
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_name": "ceph_lv1",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_size": "21470642176",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "name": "ceph_lv1",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "tags": {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.crush_device_class": "",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.encrypted": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.objectstore": "bluestore",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osd_id": "1",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.type": "block",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.vdo": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.with_tpm": "0"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             },
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "type": "block",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "vg_name": "ceph_vg1"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:         }
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:     ],
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:     "2": [
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:         {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "devices": [
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "/dev/loop5"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             ],
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_name": "ceph_lv2",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_size": "21470642176",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "name": "ceph_lv2",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "tags": {
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.cluster_name": "ceph",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.crush_device_class": "",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.encrypted": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.objectstore": "bluestore",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osd_id": "2",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.type": "block",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.vdo": "0",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:                 "ceph.with_tpm": "0"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             },
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "type": "block",
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:             "vg_name": "ceph_vg2"
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:         }
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]:     ]
Jan 31 08:48:23 compute-0 quirky_lumiere[258042]: }
Jan 31 08:48:23 compute-0 systemd[1]: libpod-4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01.scope: Deactivated successfully.
Jan 31 08:48:23 compute-0 podman[258026]: 2026-01-31 08:48:23.143093127 +0000 UTC m=+0.396776683 container died 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 08:48:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b58ae1d8eeaf17ac981201cca425ccfda467828b76215b3c27bd4683094ed622-merged.mount: Deactivated successfully.
Jan 31 08:48:23 compute-0 podman[258026]: 2026-01-31 08:48:23.447442761 +0000 UTC m=+0.701126307 container remove 4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:23 compute-0 sudo[257949]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:23 compute-0 sudo[258065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:48:23 compute-0 sudo[258065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:23 compute-0 sudo[258065]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:23 compute-0 systemd[1]: libpod-conmon-4528b31ecaa1ab1dced776636797a850e973cbada34ea2dfac65a4824c2d7b01.scope: Deactivated successfully.
Jan 31 08:48:23 compute-0 sudo[258090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:48:23 compute-0 sudo[258090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:23 compute-0 podman[258127]: 2026-01-31 08:48:23.816286928 +0000 UTC m=+0.018975086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:23 compute-0 podman[258127]: 2026-01-31 08:48:23.919459466 +0000 UTC m=+0.122147624 container create dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:23 compute-0 ceph-mon[75227]: pgmap v1387: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:23 compute-0 systemd[1]: Started libpod-conmon-dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6.scope.
Jan 31 08:48:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:24 compute-0 podman[258127]: 2026-01-31 08:48:24.089516877 +0000 UTC m=+0.292205135 container init dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 08:48:24 compute-0 podman[258127]: 2026-01-31 08:48:24.097833406 +0000 UTC m=+0.300521564 container start dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 08:48:24 compute-0 happy_vaughan[258143]: 167 167
Jan 31 08:48:24 compute-0 systemd[1]: libpod-dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6.scope: Deactivated successfully.
Jan 31 08:48:24 compute-0 podman[258127]: 2026-01-31 08:48:24.153616431 +0000 UTC m=+0.356304629 container attach dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:48:24 compute-0 podman[258127]: 2026-01-31 08:48:24.154752843 +0000 UTC m=+0.357441031 container died dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7282ff69586829ef95f6f6f3a8a9d630b1c1bc855d98a9157fad66082359075-merged.mount: Deactivated successfully.
Jan 31 08:48:24 compute-0 podman[258127]: 2026-01-31 08:48:24.341711961 +0000 UTC m=+0.544400159 container remove dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:48:24 compute-0 systemd[1]: libpod-conmon-dda5b6362cb77c83c86517e8f333f188197500d6ee9d255c759975351335aea6.scope: Deactivated successfully.
Jan 31 08:48:24 compute-0 podman[258166]: 2026-01-31 08:48:24.4922356 +0000 UTC m=+0.038571690 container create 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 08:48:24 compute-0 systemd[1]: Started libpod-conmon-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope.
Jan 31 08:48:24 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:48:24 compute-0 podman[258166]: 2026-01-31 08:48:24.562866111 +0000 UTC m=+0.109202231 container init 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:48:24 compute-0 podman[258166]: 2026-01-31 08:48:24.569260555 +0000 UTC m=+0.115596635 container start 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:48:24 compute-0 podman[258166]: 2026-01-31 08:48:24.475524049 +0000 UTC m=+0.021860159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:48:24 compute-0 podman[258166]: 2026-01-31 08:48:24.574845966 +0000 UTC m=+0.121182056 container attach 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:48:25 compute-0 lvm[258259]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:48:25 compute-0 lvm[258259]: VG ceph_vg0 finished
Jan 31 08:48:25 compute-0 lvm[258262]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:48:25 compute-0 lvm[258262]: VG ceph_vg1 finished
Jan 31 08:48:25 compute-0 lvm[258264]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:48:25 compute-0 lvm[258264]: VG ceph_vg2 finished
Jan 31 08:48:25 compute-0 wizardly_yonath[258182]: {}
Jan 31 08:48:25 compute-0 systemd[1]: libpod-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope: Deactivated successfully.
Jan 31 08:48:25 compute-0 systemd[1]: libpod-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope: Consumed 1.025s CPU time.
Jan 31 08:48:25 compute-0 podman[258166]: 2026-01-31 08:48:25.331615152 +0000 UTC m=+0.877951262 container died 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:48:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-428296870a560cbb9c778c27d76d455e384ea8c2b7a0aa543880f4116f0b844a-merged.mount: Deactivated successfully.
Jan 31 08:48:25 compute-0 podman[258166]: 2026-01-31 08:48:25.587615115 +0000 UTC m=+1.133951205 container remove 027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 08:48:25 compute-0 systemd[1]: libpod-conmon-027aedbe56f8ca0a0ee9a7525fb2dc576e21cd712ffe7a805033f64ed179e2f2.scope: Deactivated successfully.
Jan 31 08:48:25 compute-0 sudo[258090]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:48:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:25 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:48:25 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:25 compute-0 sudo[258278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:48:25 compute-0 sudo[258278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:48:25 compute-0 sudo[258278]: pam_unix(sudo:session): session closed for user root
Jan 31 08:48:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:26 compute-0 nova_compute[238824]: 2026-01-31 08:48:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:26 compute-0 nova_compute[238824]: 2026-01-31 08:48:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:26 compute-0 nova_compute[238824]: 2026-01-31 08:48:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:48:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:26 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:48:26 compute-0 ceph-mon[75227]: pgmap v1388: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:28 compute-0 ceph-mon[75227]: pgmap v1389: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:29 compute-0 nova_compute[238824]: 2026-01-31 08:48:29.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:29 compute-0 nova_compute[238824]: 2026-01-31 08:48:29.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:29 compute-0 ceph-mon[75227]: pgmap v1390: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:30 compute-0 nova_compute[238824]: 2026-01-31 08:48:30.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:48:31
Jan 31 08:48:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:48:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:48:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Jan 31 08:48:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:48:31 compute-0 ceph-mon[75227]: pgmap v1391: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:32 compute-0 nova_compute[238824]: 2026-01-31 08:48:32.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:32 compute-0 nova_compute[238824]: 2026-01-31 08:48:32.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:48:32 compute-0 nova_compute[238824]: 2026-01-31 08:48:32.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:48:32 compute-0 nova_compute[238824]: 2026-01-31 08:48:32.354 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:48:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:48:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.370 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:48:33 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/933584053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:33 compute-0 ceph-mon[75227]: pgmap v1392: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:33 compute-0 nova_compute[238824]: 2026-01-31 08:48:33.937 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.054 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.055 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.056 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.056 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.154 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.154 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.175 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:48:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:48:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3823023545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.679 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:48:34 compute-0 nova_compute[238824]: 2026-01-31 08:48:34.684 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:48:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/933584053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3823023545' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:48:35 compute-0 nova_compute[238824]: 2026-01-31 08:48:35.307 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:48:35 compute-0 nova_compute[238824]: 2026-01-31 08:48:35.309 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:48:35 compute-0 nova_compute[238824]: 2026-01-31 08:48:35.309 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:48:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:35 compute-0 ceph-mon[75227]: pgmap v1393: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:36 compute-0 podman[258348]: 2026-01-31 08:48:36.169798109 +0000 UTC m=+0.060080789 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 08:48:36 compute-0 podman[258347]: 2026-01-31 08:48:36.205091214 +0000 UTC m=+0.100199293 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 08:48:36 compute-0 nova_compute[238824]: 2026-01-31 08:48:36.303 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:48:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:37 compute-0 ceph-mon[75227]: pgmap v1394: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:39 compute-0 ceph-mon[75227]: pgmap v1395: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:41 compute-0 ceph-mon[75227]: pgmap v1396: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.257160766784386e-07 of space, bias 1.0, pg target 9.771482300353158e-05 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.5331644121694047e-06 of space, bias 4.0, pg target 0.0030397972946032857 quantized to 16 (current 16)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:48:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:43 compute-0 ceph-mon[75227]: pgmap v1397: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:45 compute-0 ceph-mon[75227]: pgmap v1398: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 31 08:48:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:47 compute-0 ceph-mon[75227]: pgmap v1399: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:50 compute-0 ceph-mon[75227]: pgmap v1400: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:52 compute-0 ceph-mon[75227]: pgmap v1401: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:54 compute-0 ceph-mon[75227]: pgmap v1402: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:48:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 31 08:48:55 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 08:48:55 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 31 08:48:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 21 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Jan 31 08:48:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:48:56 compute-0 ceph-mon[75227]: osdmap e142: 3 total, 3 up, 3 in
Jan 31 08:48:56 compute-0 ceph-mon[75227]: pgmap v1404: 305 pgs: 305 active+clean; 21 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Jan 31 08:48:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 21 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Jan 31 08:48:57 compute-0 ceph-mon[75227]: pgmap v1405: 305 pgs: 305 active+clean; 21 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 2.0 MiB/s wr, 7 op/s
Jan 31 08:48:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 08:48:59 compute-0 ceph-mon[75227]: pgmap v1406: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 08:49:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.019517) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341019552, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 903, "num_deletes": 250, "total_data_size": 1290751, "memory_usage": 1311120, "flush_reason": "Manual Compaction"}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341030160, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 802201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27779, "largest_seqno": 28681, "table_properties": {"data_size": 798489, "index_size": 1428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9719, "raw_average_key_size": 20, "raw_value_size": 790553, "raw_average_value_size": 1682, "num_data_blocks": 65, "num_entries": 470, "num_filter_entries": 470, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849259, "oldest_key_time": 1769849259, "file_creation_time": 1769849341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10711 microseconds, and 2230 cpu microseconds.
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.030222) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 802201 bytes OK
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.030244) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033142) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033154) EVENT_LOG_v1 {"time_micros": 1769849341033150, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1286376, prev total WAL file size 1286376, number of live WAL files 2.
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033559) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323537' seq:0, type:0; will stop at (end)
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(783KB)], [62(9232KB)]
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341033593, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 10256384, "oldest_snapshot_seqno": -1}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5139 keys, 7381180 bytes, temperature: kUnknown
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341067482, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7381180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7348450, "index_size": 18796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 127802, "raw_average_key_size": 24, "raw_value_size": 7257057, "raw_average_value_size": 1412, "num_data_blocks": 779, "num_entries": 5139, "num_filter_entries": 5139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.067687) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7381180 bytes
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.068920) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 302.1 rd, 217.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.0 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(22.0) write-amplify(9.2) OK, records in: 5620, records dropped: 481 output_compression: NoCompression
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.068937) EVENT_LOG_v1 {"time_micros": 1769849341068927, "job": 34, "event": "compaction_finished", "compaction_time_micros": 33955, "compaction_time_cpu_micros": 13358, "output_level": 6, "num_output_files": 1, "total_output_size": 7381180, "num_input_records": 5620, "num_output_records": 5139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341069139, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849341070330, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.033494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:49:01 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:49:01.070396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:49:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 08:49:02 compute-0 ceph-mon[75227]: pgmap v1407: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 08:49:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 08:49:04 compute-0 ceph-mon[75227]: pgmap v1408: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 2.0 MiB/s wr, 11 op/s
Jan 31 08:49:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 771 KiB/s wr, 7 op/s
Jan 31 08:49:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:06 compute-0 ceph-mon[75227]: pgmap v1409: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 4.9 KiB/s rd, 771 KiB/s wr, 7 op/s
Jan 31 08:49:07 compute-0 podman[258391]: 2026-01-31 08:49:07.167241166 +0000 UTC m=+0.060496791 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 08:49:07 compute-0 podman[258392]: 2026-01-31 08:49:07.175310018 +0000 UTC m=+0.066128933 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 08:49:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 31 08:49:08 compute-0 ceph-mon[75227]: pgmap v1410: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 31 08:49:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 31 08:49:09 compute-0 ceph-mon[75227]: pgmap v1411: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Jan 31 08:49:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Jan 31 08:49:12 compute-0 ceph-mon[75227]: pgmap v1412: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Jan 31 08:49:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:13 compute-0 ceph-mon[75227]: pgmap v1413: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:15 compute-0 ceph-mon[75227]: pgmap v1414: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:49:17.910 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:49:17.911 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:49:17.911 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:17 compute-0 ceph-mon[75227]: pgmap v1415: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:49:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800954654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:49:17 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:49:17 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800954654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:49:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/800954654' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:49:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/800954654' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:49:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:19 compute-0 ceph-mon[75227]: pgmap v1416: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:20 compute-0 nova_compute[238824]: 2026-01-31 08:49:20.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:20 compute-0 nova_compute[238824]: 2026-01-31 08:49:20.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 08:49:20 compute-0 nova_compute[238824]: 2026-01-31 08:49:20.355 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 08:49:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:21 compute-0 nova_compute[238824]: 2026-01-31 08:49:21.355 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:21 compute-0 ceph-mon[75227]: pgmap v1417: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:23 compute-0 ceph-mon[75227]: pgmap v1418: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:25 compute-0 sudo[258437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:25 compute-0 sudo[258437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:25 compute-0 sudo[258437]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:25 compute-0 ceph-mon[75227]: pgmap v1419: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:25 compute-0 sudo[258462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 31 08:49:25 compute-0 sudo[258462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:26 compute-0 nova_compute[238824]: 2026-01-31 08:49:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:26 compute-0 nova_compute[238824]: 2026-01-31 08:49:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:26 compute-0 nova_compute[238824]: 2026-01-31 08:49:26.339 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:49:26 compute-0 nova_compute[238824]: 2026-01-31 08:49:26.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:26 compute-0 nova_compute[238824]: 2026-01-31 08:49:26.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 08:49:26 compute-0 podman[258531]: 2026-01-31 08:49:26.374544526 +0000 UTC m=+0.052921633 container exec 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:49:26 compute-0 podman[258531]: 2026-01-31 08:49:26.502575559 +0000 UTC m=+0.180952646 container exec_died 2c160fb9852a007dc977740f88f96001cc57b1cb392a9e315d541aef8037777a (image=quay.io/ceph/ceph:v20, name=ceph-82c880e6-d992-5408-8b12-efff9c275473-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:49:27 compute-0 sudo[258462]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:27 compute-0 sudo[258719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:27 compute-0 sudo[258719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:27 compute-0 sudo[258719]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:27 compute-0 sudo[258744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:49:27 compute-0 sudo[258744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:27 compute-0 sudo[258744]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:49:27 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:49:27 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:49:27 compute-0 sudo[258801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:27 compute-0 sudo[258801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:27 compute-0 sudo[258801]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:27 compute-0 sudo[258826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:49:27 compute-0 sudo[258826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:27 compute-0 podman[258861]: 2026-01-31 08:49:27.916920368 +0000 UTC m=+0.036909472 container create bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:49:27 compute-0 systemd[1]: Started libpod-conmon-bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45.scope.
Jan 31 08:49:27 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:27 compute-0 podman[258861]: 2026-01-31 08:49:27.980996071 +0000 UTC m=+0.100985195 container init bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 31 08:49:27 compute-0 podman[258861]: 2026-01-31 08:49:27.985319365 +0000 UTC m=+0.105308469 container start bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 31 08:49:27 compute-0 podman[258861]: 2026-01-31 08:49:27.988103885 +0000 UTC m=+0.108093019 container attach bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:49:27 compute-0 musing_wilson[258877]: 167 167
Jan 31 08:49:27 compute-0 systemd[1]: libpod-bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45.scope: Deactivated successfully.
Jan 31 08:49:27 compute-0 podman[258861]: 2026-01-31 08:49:27.990231836 +0000 UTC m=+0.110220950 container died bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 31 08:49:27 compute-0 podman[258861]: 2026-01-31 08:49:27.901163875 +0000 UTC m=+0.021152999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-eec8ad7002a94afab82a50fe1e2e2df82c5116a2452d24edf4533bf723001403-merged.mount: Deactivated successfully.
Jan 31 08:49:28 compute-0 podman[258861]: 2026-01-31 08:49:28.031526074 +0000 UTC m=+0.151515188 container remove bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_wilson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:49:28 compute-0 systemd[1]: libpod-conmon-bf1456dfce06e57533a4e5ddbb6ff3e1097b6596f2cd32f1fb33e6adae33ea45.scope: Deactivated successfully.
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:49:28 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:49:28 compute-0 ceph-mon[75227]: pgmap v1420: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.167230627 +0000 UTC m=+0.041760862 container create 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:49:28 compute-0 systemd[1]: Started libpod-conmon-2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4.scope.
Jan 31 08:49:28 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.148745995 +0000 UTC m=+0.023276260 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.255888867 +0000 UTC m=+0.130419122 container init 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.262305581 +0000 UTC m=+0.136835816 container start 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.265489263 +0000 UTC m=+0.140019518 container attach 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:49:28 compute-0 epic_margulis[258920]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:49:28 compute-0 epic_margulis[258920]: --> All data devices are unavailable
Jan 31 08:49:28 compute-0 systemd[1]: libpod-2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4.scope: Deactivated successfully.
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.714523168 +0000 UTC m=+0.589053443 container died 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:49:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-a43782d8dc4770c435a98d591e62174513e0ba842dd1418a7804732ff810fb57-merged.mount: Deactivated successfully.
Jan 31 08:49:28 compute-0 podman[258903]: 2026-01-31 08:49:28.767601444 +0000 UTC m=+0.642131709 container remove 2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:49:28 compute-0 systemd[1]: libpod-conmon-2d57fe5a7229705c94c9a34ffb4acd6e42b2081bb3955ac932a7441f3b7cf0c4.scope: Deactivated successfully.
Jan 31 08:49:28 compute-0 sudo[258826]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:28 compute-0 sudo[258951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:28 compute-0 sudo[258951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:28 compute-0 sudo[258951]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:28 compute-0 sudo[258976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:49:28 compute-0 sudo[258976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.241300019 +0000 UTC m=+0.046123918 container create 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:49:29 compute-0 systemd[1]: Started libpod-conmon-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope.
Jan 31 08:49:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.31087091 +0000 UTC m=+0.115694819 container init 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.220206082 +0000 UTC m=+0.025029961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.314989659 +0000 UTC m=+0.119813518 container start 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:49:29 compute-0 elastic_matsumoto[259028]: 167 167
Jan 31 08:49:29 compute-0 systemd[1]: libpod-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope: Deactivated successfully.
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.318038956 +0000 UTC m=+0.122862845 container attach 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:49:29 compute-0 conmon[259028]: conmon 1949efd019c03dce5c38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope/container/memory.events
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.31920344 +0000 UTC m=+0.124027289 container died 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 31 08:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-45051a31c824280b692fad5e532d1ef40ca4b51477afc84ad2af789d38465bcd-merged.mount: Deactivated successfully.
Jan 31 08:49:29 compute-0 podman[259012]: 2026-01-31 08:49:29.352163968 +0000 UTC m=+0.156987827 container remove 1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:49:29 compute-0 systemd[1]: libpod-conmon-1949efd019c03dce5c3862b4e485e26c73625511688b961b29426b0e577589e2.scope: Deactivated successfully.
Jan 31 08:49:29 compute-0 nova_compute[238824]: 2026-01-31 08:49:29.358 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:29 compute-0 podman[259052]: 2026-01-31 08:49:29.483941988 +0000 UTC m=+0.037232092 container create 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:49:29 compute-0 systemd[1]: Started libpod-conmon-3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842.scope.
Jan 31 08:49:29 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:29 compute-0 podman[259052]: 2026-01-31 08:49:29.547068854 +0000 UTC m=+0.100358978 container init 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 08:49:29 compute-0 podman[259052]: 2026-01-31 08:49:29.553132838 +0000 UTC m=+0.106422942 container start 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:49:29 compute-0 podman[259052]: 2026-01-31 08:49:29.556505655 +0000 UTC m=+0.109795769 container attach 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:49:29 compute-0 podman[259052]: 2026-01-31 08:49:29.468630818 +0000 UTC m=+0.021920962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]: {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:     "0": [
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:         {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "devices": [
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "/dev/loop3"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             ],
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_name": "ceph_lv0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_size": "21470642176",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "name": "ceph_lv0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "tags": {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cluster_name": "ceph",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.crush_device_class": "",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.encrypted": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.objectstore": "bluestore",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osd_id": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.type": "block",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.vdo": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.with_tpm": "0"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             },
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "type": "block",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "vg_name": "ceph_vg0"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:         }
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:     ],
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:     "1": [
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:         {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "devices": [
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "/dev/loop4"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             ],
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_name": "ceph_lv1",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_size": "21470642176",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "name": "ceph_lv1",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "tags": {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cluster_name": "ceph",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.crush_device_class": "",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.encrypted": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.objectstore": "bluestore",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osd_id": "1",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.type": "block",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.vdo": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.with_tpm": "0"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             },
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "type": "block",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "vg_name": "ceph_vg1"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:         }
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:     ],
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:     "2": [
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:         {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "devices": [
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "/dev/loop5"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             ],
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_name": "ceph_lv2",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_size": "21470642176",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "name": "ceph_lv2",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "tags": {
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.cluster_name": "ceph",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.crush_device_class": "",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.encrypted": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.objectstore": "bluestore",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osd_id": "2",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.type": "block",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.vdo": "0",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:                 "ceph.with_tpm": "0"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             },
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "type": "block",
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:             "vg_name": "ceph_vg2"
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:         }
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]:     ]
Jan 31 08:49:29 compute-0 friendly_visvesvaraya[259069]: }
Jan 31 08:49:29 compute-0 systemd[1]: libpod-3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842.scope: Deactivated successfully.
Jan 31 08:49:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:29 compute-0 podman[259078]: 2026-01-31 08:49:29.876874519 +0000 UTC m=+0.030330933 container died 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:49:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ddedfa00f30fe2e021379683ea09aa655e4e6ab5223e7e2940be024a52866c0-merged.mount: Deactivated successfully.
Jan 31 08:49:29 compute-0 podman[259078]: 2026-01-31 08:49:29.913571175 +0000 UTC m=+0.067027579 container remove 3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 08:49:29 compute-0 systemd[1]: libpod-conmon-3aedfa25f501782c387925dc9b1301d7b9081f82f76430171ce53f5250516842.scope: Deactivated successfully.
Jan 31 08:49:29 compute-0 ceph-mon[75227]: pgmap v1421: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:29 compute-0 sudo[258976]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:30 compute-0 sudo[259093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:49:30 compute-0 sudo[259093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:30 compute-0 sudo[259093]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:30 compute-0 sudo[259118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:49:30 compute-0 sudo[259118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.323214907 +0000 UTC m=+0.039736494 container create 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:49:30 compute-0 systemd[1]: Started libpod-conmon-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope.
Jan 31 08:49:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.304738475 +0000 UTC m=+0.021260102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.406166883 +0000 UTC m=+0.122688470 container init 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.415844531 +0000 UTC m=+0.132366098 container start 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.419721943 +0000 UTC m=+0.136243540 container attach 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:49:30 compute-0 frosty_allen[259171]: 167 167
Jan 31 08:49:30 compute-0 systemd[1]: libpod-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope: Deactivated successfully.
Jan 31 08:49:30 compute-0 conmon[259171]: conmon 3e5ab8946644e4776bed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope/container/memory.events
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.421694349 +0000 UTC m=+0.138215916 container died 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:49:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b00f664b8cfab53da4e4df9e219ce684cad3bba3d96b168bb62d4eb3376402-merged.mount: Deactivated successfully.
Jan 31 08:49:30 compute-0 podman[259155]: 2026-01-31 08:49:30.454850523 +0000 UTC m=+0.171372100 container remove 3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:49:30 compute-0 systemd[1]: libpod-conmon-3e5ab8946644e4776bed658f4cf6acc8f81b8faacf37c8cde415c63c661f2069.scope: Deactivated successfully.
Jan 31 08:49:30 compute-0 podman[259196]: 2026-01-31 08:49:30.598839644 +0000 UTC m=+0.043454590 container create 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:49:30 compute-0 systemd[1]: Started libpod-conmon-9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c.scope.
Jan 31 08:49:30 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:49:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:49:30 compute-0 podman[259196]: 2026-01-31 08:49:30.663532535 +0000 UTC m=+0.108147481 container init 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 08:49:30 compute-0 podman[259196]: 2026-01-31 08:49:30.67275438 +0000 UTC m=+0.117369316 container start 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:49:30 compute-0 podman[259196]: 2026-01-31 08:49:30.578821479 +0000 UTC m=+0.023436455 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:49:30 compute-0 podman[259196]: 2026-01-31 08:49:30.676101517 +0000 UTC m=+0.120716483 container attach 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:49:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:31 compute-0 lvm[259291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:49:31 compute-0 lvm[259294]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:49:31 compute-0 lvm[259291]: VG ceph_vg0 finished
Jan 31 08:49:31 compute-0 lvm[259294]: VG ceph_vg1 finished
Jan 31 08:49:31 compute-0 lvm[259296]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:49:31 compute-0 lvm[259296]: VG ceph_vg2 finished
Jan 31 08:49:31 compute-0 nova_compute[238824]: 2026-01-31 08:49:31.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:31 compute-0 exciting_wozniak[259213]: {}
Jan 31 08:49:31 compute-0 systemd[1]: libpod-9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c.scope: Deactivated successfully.
Jan 31 08:49:31 compute-0 podman[259196]: 2026-01-31 08:49:31.373147615 +0000 UTC m=+0.817762551 container died 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:49:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4835ed517898644e6512ed6c065251c1f45a6f438fe7cd783398ce587db4436-merged.mount: Deactivated successfully.
Jan 31 08:49:31 compute-0 podman[259196]: 2026-01-31 08:49:31.415182024 +0000 UTC m=+0.859796960 container remove 9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_wozniak, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:49:31 compute-0 systemd[1]: libpod-conmon-9f811664586d1e86176f93d66fcece559e7e081e8a2039b4a037f78d416d987c.scope: Deactivated successfully.
Jan 31 08:49:31 compute-0 sudo[259118]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:49:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:49:31 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:31 compute-0 sudo[259310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:49:31 compute-0 sudo[259310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:49:31 compute-0 sudo[259310]: pam_unix(sudo:session): session closed for user root
Jan 31 08:49:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:49:31
Jan 31 08:49:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:49:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:49:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes']
Jan 31 08:49:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:49:32 compute-0 nova_compute[238824]: 2026-01-31 08:49:32.341 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:32 compute-0 nova_compute[238824]: 2026-01-31 08:49:32.342 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:49:32 compute-0 nova_compute[238824]: 2026-01-31 08:49:32.343 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:49:32 compute-0 nova_compute[238824]: 2026-01-31 08:49:32.390 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:49:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:49:32 compute-0 ceph-mon[75227]: pgmap v1422: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:49:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:49:33 compute-0 nova_compute[238824]: 2026-01-31 08:49:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:33 compute-0 sshd-session[259335]: Invalid user solana from 92.118.39.76 port 33156
Jan 31 08:49:33 compute-0 sshd-session[259335]: Connection closed by invalid user solana 92.118.39.76 port 33156 [preauth]
Jan 31 08:49:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:33 compute-0 ceph-mon[75227]: pgmap v1423: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:34 compute-0 nova_compute[238824]: 2026-01-31 08:49:34.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.352 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.373 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.374 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.374 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.374 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.375 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:49:35 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133995121' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:49:35 compute-0 nova_compute[238824]: 2026-01-31 08:49:35.898 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:35 compute-0 ceph-mon[75227]: pgmap v1424: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:35 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1133995121' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.011 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.012 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.012 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.012 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:49:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.164 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.165 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.187 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:49:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:49:36 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803382574' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.733 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.737 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.751 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.753 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:49:36 compute-0 nova_compute[238824]: 2026-01-31 08:49:36.753 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:49:36 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3803382574' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:49:37 compute-0 nova_compute[238824]: 2026-01-31 08:49:37.734 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:49:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:37 compute-0 ceph-mon[75227]: pgmap v1425: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:38 compute-0 podman[259382]: 2026-01-31 08:49:38.179114537 +0000 UTC m=+0.063843527 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 08:49:38 compute-0 podman[259381]: 2026-01-31 08:49:38.209161011 +0000 UTC m=+0.096708582 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 31 08:49:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:39 compute-0 ceph-mon[75227]: pgmap v1426: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:41 compute-0 ceph-mon[75227]: pgmap v1427: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033324202763395413 of space, bias 1.0, pg target 0.09997260829018624 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.535042946262736e-06 of space, bias 4.0, pg target 0.003042051535515283 quantized to 16 (current 16)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:49:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:43 compute-0 ceph-mon[75227]: pgmap v1428: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:45 compute-0 ceph-mon[75227]: pgmap v1429: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:47 compute-0 ceph-mon[75227]: pgmap v1430: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:49 compute-0 ceph-mon[75227]: pgmap v1431: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:51 compute-0 ceph-mon[75227]: pgmap v1432: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:53 compute-0 ceph-mon[75227]: pgmap v1433: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:55 compute-0 ceph-mon[75227]: pgmap v1434: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:49:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:57 compute-0 ceph-mon[75227]: pgmap v1435: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:49:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 31 08:49:59 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 08:49:59 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 31 08:49:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 8.5 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 31 08:50:00 compute-0 ceph-mon[75227]: osdmap e143: 3 total, 3 up, 3 in
Jan 31 08:50:00 compute-0 ceph-mon[75227]: pgmap v1437: 305 pgs: 305 active+clean; 8.5 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 31 08:50:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:50:01 compute-0 ceph-mon[75227]: pgmap v1438: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:50:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:50:03 compute-0 ceph-mon[75227]: pgmap v1439: 305 pgs: 305 active+clean; 461 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:50:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:50:05 compute-0 ceph-mon[75227]: pgmap v1440: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 31 08:50:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 31 08:50:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 08:50:06 compute-0 ceph-mon[75227]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 31 08:50:07 compute-0 ceph-mon[75227]: osdmap e144: 3 total, 3 up, 3 in
Jan 31 08:50:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 31 08:50:08 compute-0 ceph-mon[75227]: pgmap v1442: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 31 08:50:09 compute-0 podman[259431]: 2026-01-31 08:50:09.153502718 +0000 UTC m=+0.038333583 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 08:50:09 compute-0 podman[259430]: 2026-01-31 08:50:09.206112351 +0000 UTC m=+0.089627379 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 08:50:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 08:50:10 compute-0 ceph-mon[75227]: pgmap v1443: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 08:50:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:12 compute-0 ceph-mon[75227]: pgmap v1444: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:15 compute-0 ceph-mon[75227]: pgmap v1445: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:17 compute-0 ceph-mon[75227]: pgmap v1446: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:50:17.912 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:50:17.912 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:50:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:50:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2255035226' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:50:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:50:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2255035226' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:50:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2255035226' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:50:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/2255035226' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:50:19 compute-0 ceph-mon[75227]: pgmap v1447: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:21 compute-0 ceph-mon[75227]: pgmap v1448: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:23 compute-0 ceph-mon[75227]: pgmap v1449: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:23 compute-0 nova_compute[238824]: 2026-01-31 08:50:23.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:25 compute-0 ceph-mon[75227]: pgmap v1450: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:26 compute-0 nova_compute[238824]: 2026-01-31 08:50:26.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:27 compute-0 ceph-mon[75227]: pgmap v1451: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:27 compute-0 nova_compute[238824]: 2026-01-31 08:50:27.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:27 compute-0 nova_compute[238824]: 2026-01-31 08:50:27.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:50:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:29 compute-0 ceph-mon[75227]: pgmap v1452: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:30 compute-0 nova_compute[238824]: 2026-01-31 08:50:30.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:31 compute-0 ceph-mon[75227]: pgmap v1453: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:31 compute-0 sudo[259475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:50:31 compute-0 sudo[259475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:31 compute-0 sudo[259475]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:31 compute-0 sudo[259500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:50:31 compute-0 sudo[259500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:50:31
Jan 31 08:50:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:50:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:50:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta']
Jan 31 08:50:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:50:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:32 compute-0 sudo[259500]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:50:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:50:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:50:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:50:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:50:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:50:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:50:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:50:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:50:32 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:50:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:50:32 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:50:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:50:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:50:32 compute-0 sudo[259556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:50:32 compute-0 sudo[259556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:32 compute-0 sudo[259556]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:32 compute-0 sudo[259581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:50:32 compute-0 sudo[259581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:32 compute-0 nova_compute[238824]: 2026-01-31 08:50:32.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.47984022 +0000 UTC m=+0.040724132 container create 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:50:32 compute-0 systemd[1]: Started libpod-conmon-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope.
Jan 31 08:50:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.560217352 +0000 UTC m=+0.121101314 container init 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.463596723 +0000 UTC m=+0.024480655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.566901784 +0000 UTC m=+0.127785716 container start 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:50:32 compute-0 trusting_wozniak[259634]: 167 167
Jan 31 08:50:32 compute-0 systemd[1]: libpod-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope: Deactivated successfully.
Jan 31 08:50:32 compute-0 conmon[259634]: conmon 6b04cccc3454259882a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope/container/memory.events
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.572414143 +0000 UTC m=+0.133298085 container attach 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.573298198 +0000 UTC m=+0.134182130 container died 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:50:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0576fbec2df5fffcf1c643f7a252602f0d14f3de8f7e11e96cab66f4152601dd-merged.mount: Deactivated successfully.
Jan 31 08:50:32 compute-0 podman[259618]: 2026-01-31 08:50:32.619306421 +0000 UTC m=+0.180190353 container remove 6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 08:50:32 compute-0 systemd[1]: libpod-conmon-6b04cccc3454259882a029073bdd4e6e10aa7c683efa249cd1cf4ceba25fd345.scope: Deactivated successfully.
Jan 31 08:50:32 compute-0 podman[259658]: 2026-01-31 08:50:32.766166055 +0000 UTC m=+0.049836914 container create 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 08:50:32 compute-0 systemd[1]: Started libpod-conmon-0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003.scope.
Jan 31 08:50:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:32 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:50:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:50:32 compute-0 podman[259658]: 2026-01-31 08:50:32.751764021 +0000 UTC m=+0.035434900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:50:32 compute-0 podman[259658]: 2026-01-31 08:50:32.862625 +0000 UTC m=+0.146295879 container init 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:50:32 compute-0 podman[259658]: 2026-01-31 08:50:32.872180965 +0000 UTC m=+0.155851824 container start 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 31 08:50:32 compute-0 podman[259658]: 2026-01-31 08:50:32.877108186 +0000 UTC m=+0.160779045 container attach 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:50:33 compute-0 ceph-mon[75227]: pgmap v1454: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:50:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:50:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:50:33 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:50:33 compute-0 wizardly_almeida[259675]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:50:33 compute-0 wizardly_almeida[259675]: --> All data devices are unavailable
Jan 31 08:50:33 compute-0 systemd[1]: libpod-0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003.scope: Deactivated successfully.
Jan 31 08:50:33 compute-0 podman[259695]: 2026-01-31 08:50:33.35704108 +0000 UTC m=+0.025348040 container died 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 08:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd29ab2fcb1e4a2829b0775e02f0a455fb014f7bf8af298f3957c93aa418ac59-merged.mount: Deactivated successfully.
Jan 31 08:50:33 compute-0 podman[259695]: 2026-01-31 08:50:33.401923471 +0000 UTC m=+0.070230431 container remove 0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:50:33 compute-0 systemd[1]: libpod-conmon-0c405f3b2a21acaf86671c11c4cc0b8d4aa9f780e01c72ef95065a9131f51003.scope: Deactivated successfully.
Jan 31 08:50:33 compute-0 sudo[259581]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:33 compute-0 sudo[259710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:50:33 compute-0 sudo[259710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:33 compute-0 sudo[259710]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:33 compute-0 sudo[259735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:50:33 compute-0 sudo[259735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.827932534 +0000 UTC m=+0.039176568 container create 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:50:33 compute-0 systemd[1]: Started libpod-conmon-2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46.scope.
Jan 31 08:50:33 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.889107123 +0000 UTC m=+0.100351187 container init 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:50:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.894163769 +0000 UTC m=+0.105407823 container start 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.897603838 +0000 UTC m=+0.108847912 container attach 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:50:33 compute-0 agitated_mahavira[259788]: 167 167
Jan 31 08:50:33 compute-0 systemd[1]: libpod-2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46.scope: Deactivated successfully.
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.900601194 +0000 UTC m=+0.111845248 container died 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.812327375 +0000 UTC m=+0.023571459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:50:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc7f27acb6f001a18f3ecfb3f95c89e83af6d9df020a9348bd5ae43d0ffb0a1a-merged.mount: Deactivated successfully.
Jan 31 08:50:33 compute-0 podman[259772]: 2026-01-31 08:50:33.93800329 +0000 UTC m=+0.149247334 container remove 2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_mahavira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:50:33 compute-0 systemd[1]: libpod-conmon-2649e72f2f7026e95cf94533104aacc5ce8e673fe493e84e5e5987663ac11c46.scope: Deactivated successfully.
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.070892792 +0000 UTC m=+0.040819985 container create a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:50:34 compute-0 systemd[1]: Started libpod-conmon-a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410.scope.
Jan 31 08:50:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.13271251 +0000 UTC m=+0.102639713 container init a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.138674081 +0000 UTC m=+0.108601264 container start a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.141514023 +0000 UTC m=+0.111441296 container attach a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.055500339 +0000 UTC m=+0.025427542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:50:34 compute-0 nova_compute[238824]: 2026-01-31 08:50:34.334 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:34 compute-0 nova_compute[238824]: 2026-01-31 08:50:34.357 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:34 compute-0 nova_compute[238824]: 2026-01-31 08:50:34.357 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:50:34 compute-0 nova_compute[238824]: 2026-01-31 08:50:34.358 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:50:34 compute-0 bold_lalande[259830]: {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:     "0": [
Jan 31 08:50:34 compute-0 bold_lalande[259830]:         {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "devices": [
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "/dev/loop3"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             ],
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_name": "ceph_lv0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_size": "21470642176",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "name": "ceph_lv0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "tags": {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cluster_name": "ceph",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.crush_device_class": "",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.encrypted": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.objectstore": "bluestore",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osd_id": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.type": "block",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.vdo": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.with_tpm": "0"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             },
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "type": "block",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "vg_name": "ceph_vg0"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:         }
Jan 31 08:50:34 compute-0 bold_lalande[259830]:     ],
Jan 31 08:50:34 compute-0 bold_lalande[259830]:     "1": [
Jan 31 08:50:34 compute-0 bold_lalande[259830]:         {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "devices": [
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "/dev/loop4"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             ],
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_name": "ceph_lv1",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_size": "21470642176",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "name": "ceph_lv1",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "tags": {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cluster_name": "ceph",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.crush_device_class": "",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.encrypted": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.objectstore": "bluestore",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osd_id": "1",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.type": "block",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.vdo": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.with_tpm": "0"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             },
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "type": "block",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "vg_name": "ceph_vg1"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:         }
Jan 31 08:50:34 compute-0 bold_lalande[259830]:     ],
Jan 31 08:50:34 compute-0 bold_lalande[259830]:     "2": [
Jan 31 08:50:34 compute-0 bold_lalande[259830]:         {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "devices": [
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "/dev/loop5"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             ],
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_name": "ceph_lv2",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_size": "21470642176",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "name": "ceph_lv2",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "tags": {
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.cluster_name": "ceph",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.crush_device_class": "",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.encrypted": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.objectstore": "bluestore",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osd_id": "2",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.type": "block",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.vdo": "0",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:                 "ceph.with_tpm": "0"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             },
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "type": "block",
Jan 31 08:50:34 compute-0 bold_lalande[259830]:             "vg_name": "ceph_vg2"
Jan 31 08:50:34 compute-0 bold_lalande[259830]:         }
Jan 31 08:50:34 compute-0 bold_lalande[259830]:     ]
Jan 31 08:50:34 compute-0 bold_lalande[259830]: }
Jan 31 08:50:34 compute-0 nova_compute[238824]: 2026-01-31 08:50:34.373 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:50:34 compute-0 systemd[1]: libpod-a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410.scope: Deactivated successfully.
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.400680607 +0000 UTC m=+0.370607810 container died a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-954a9c625f808b3400eb34645c7934c93c4bcf51e527544ef7b0a981db64554e-merged.mount: Deactivated successfully.
Jan 31 08:50:34 compute-0 podman[259813]: 2026-01-31 08:50:34.445704982 +0000 UTC m=+0.415632205 container remove a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:50:34 compute-0 systemd[1]: libpod-conmon-a4e392f149703e8d1cd55e55c2bb8937c13db5517ea935e302f07171d06f0410.scope: Deactivated successfully.
Jan 31 08:50:34 compute-0 sudo[259735]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:34 compute-0 sudo[259852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:50:34 compute-0 sudo[259852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:34 compute-0 sudo[259852]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:34 compute-0 sudo[259877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:50:34 compute-0 sudo[259877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.825067893 +0000 UTC m=+0.032773113 container create 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:50:34 compute-0 systemd[1]: Started libpod-conmon-948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0.scope.
Jan 31 08:50:34 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.886421218 +0000 UTC m=+0.094126458 container init 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.890946068 +0000 UTC m=+0.098651298 container start 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.894417458 +0000 UTC m=+0.102122698 container attach 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 08:50:34 compute-0 determined_mahavira[259930]: 167 167
Jan 31 08:50:34 compute-0 systemd[1]: libpod-948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0.scope: Deactivated successfully.
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.897572499 +0000 UTC m=+0.105277729 container died 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.810306399 +0000 UTC m=+0.018011649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:50:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c73bba857782130bba137ff2eaa4f90494193e718519a0ced55eee113678035c-merged.mount: Deactivated successfully.
Jan 31 08:50:34 compute-0 podman[259913]: 2026-01-31 08:50:34.93445767 +0000 UTC m=+0.142162900 container remove 948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 31 08:50:34 compute-0 systemd[1]: libpod-conmon-948c2f885e9a20c92d3ab5c5a6a2352258e1564944f651b33bda3b06705120f0.scope: Deactivated successfully.
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.070520353 +0000 UTC m=+0.052871792 container create ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 31 08:50:35 compute-0 systemd[1]: Started libpod-conmon-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope.
Jan 31 08:50:35 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:50:35 compute-0 ceph-mon[75227]: pgmap v1455: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.047764069 +0000 UTC m=+0.030115558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.16915802 +0000 UTC m=+0.151509469 container init ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.175097781 +0000 UTC m=+0.157449190 container start ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.237064783 +0000 UTC m=+0.219416222 container attach ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 31 08:50:35 compute-0 nova_compute[238824]: 2026-01-31 08:50:35.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:35 compute-0 lvm[260049]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:50:35 compute-0 lvm[260049]: VG ceph_vg0 finished
Jan 31 08:50:35 compute-0 lvm[260050]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:50:35 compute-0 lvm[260050]: VG ceph_vg1 finished
Jan 31 08:50:35 compute-0 lvm[260052]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:50:35 compute-0 lvm[260052]: VG ceph_vg2 finished
Jan 31 08:50:35 compute-0 peaceful_kalam[259971]: {}
Jan 31 08:50:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:35 compute-0 systemd[1]: libpod-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope: Deactivated successfully.
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.930934089 +0000 UTC m=+0.913285518 container died ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:50:35 compute-0 systemd[1]: libpod-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope: Consumed 1.072s CPU time.
Jan 31 08:50:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-e863b0ce9c1cc5e67eed7c3b3158bda047342d63748622a5042583e8030786f0-merged.mount: Deactivated successfully.
Jan 31 08:50:35 compute-0 podman[259954]: 2026-01-31 08:50:35.983484041 +0000 UTC m=+0.965835450 container remove ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_kalam, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:50:35 compute-0 systemd[1]: libpod-conmon-ea889ce375c88328933ac1871fb2f895af9752a34a3ce39f7638be596fd9577a.scope: Deactivated successfully.
Jan 31 08:50:36 compute-0 sudo[259877]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:50:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:50:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:50:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:50:36 compute-0 sudo[260066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:50:36 compute-0 sudo[260066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:50:36 compute-0 sudo[260066]: pam_unix(sudo:session): session closed for user root
Jan 31 08:50:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:37 compute-0 ceph-mon[75227]: pgmap v1456: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:50:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.336 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.338 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.369 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.370 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.370 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.370 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:50:37 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778539515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:50:37 compute-0 nova_compute[238824]: 2026-01-31 08:50:37.880 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.015 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.016 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5032MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.016 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.016 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:50:38 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2778539515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.228 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.298 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing inventories for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.362 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating ProviderTree inventory for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.362 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Updating inventory in ProviderTree for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.376 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing aggregate associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.396 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Refreshing trait associations for resource provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98, traits: COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_F16C,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_SHA,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,HW_CPU_X86_SSE2,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_AVX2,HW_CPU_X86_BMI2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE42,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.428 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:50:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:50:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752485080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.967 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:50:38 compute-0 nova_compute[238824]: 2026-01-31 08:50:38.973 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:50:39 compute-0 nova_compute[238824]: 2026-01-31 08:50:39.005 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:50:39 compute-0 nova_compute[238824]: 2026-01-31 08:50:39.006 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:50:39 compute-0 nova_compute[238824]: 2026-01-31 08:50:39.007 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:50:39 compute-0 ceph-mon[75227]: pgmap v1457: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2752485080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:50:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:40 compute-0 podman[260136]: 2026-01-31 08:50:40.157007808 +0000 UTC m=+0.047790025 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 08:50:40 compute-0 podman[260135]: 2026-01-31 08:50:40.181950546 +0000 UTC m=+0.072630370 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 08:50:41 compute-0 ceph-mon[75227]: pgmap v1458: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:43 compute-0 ceph-mon[75227]: pgmap v1459: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.546112324845754e-07 of space, bias 1.0, pg target 7.638336974537263e-05 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.7421629738588775e-06 of space, bias 4.0, pg target 0.003290595568630653 quantized to 16 (current 16)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:50:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:45 compute-0 ceph-mon[75227]: pgmap v1460: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:47 compute-0 ceph-mon[75227]: pgmap v1461: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:49 compute-0 ceph-mon[75227]: pgmap v1462: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:51 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:51 compute-0 ceph-mon[75227]: pgmap v1463: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:51 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:53 compute-0 ceph-mon[75227]: pgmap v1464: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:53 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:55 compute-0 ceph-mon[75227]: pgmap v1465: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:55 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:56 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:50:57 compute-0 ceph-mon[75227]: pgmap v1466: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:57 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:59 compute-0 ceph-mon[75227]: pgmap v1467: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:50:59 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:01 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:01 compute-0 ceph-mon[75227]: pgmap v1468: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:01 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:02 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:03 compute-0 ceph-mon[75227]: pgmap v1469: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:03 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:05 compute-0 ceph-mon[75227]: pgmap v1470: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:05 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:06 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:07 compute-0 ceph-mon[75227]: pgmap v1471: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:07 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:09 compute-0 ceph-mon[75227]: pgmap v1472: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:09 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:11 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:11 compute-0 podman[260181]: 2026-01-31 08:51:11.16591205 +0000 UTC m=+0.058327162 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 08:51:11 compute-0 podman[260182]: 2026-01-31 08:51:11.18099644 +0000 UTC m=+0.068049199 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 08:51:11 compute-0 ceph-mon[75227]: pgmap v1473: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:11 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:13 compute-0 ceph-mon[75227]: pgmap v1474: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:13 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:15 compute-0 ceph-mon[75227]: pgmap v1475: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:15 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:16 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:17 compute-0 ceph-mon[75227]: pgmap v1476: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:17 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:51:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:51:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:17 compute-0 ovn_metadata_agent[154972]: 2026-01-31 08:51:17.913 154977 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 31 08:51:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1866503327' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:51:18 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 31 08:51:18 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1866503327' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:51:18 compute-0 sshd-session[260226]: Accepted publickey for zuul from 192.168.122.10 port 60424 ssh2: ECDSA SHA256:Skb+4tfaoVfLHQIqkRSeA/sFlTrVc6ZnX8V66qTLHY8
Jan 31 08:51:18 compute-0 systemd-logind[793]: New session 56 of user zuul.
Jan 31 08:51:18 compute-0 systemd[1]: Started Session 56 of User zuul.
Jan 31 08:51:18 compute-0 sshd-session[260226]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 31 08:51:18 compute-0 sudo[260230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 31 08:51:18 compute-0 sudo[260230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 31 08:51:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1866503327' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 31 08:51:18 compute-0 ceph-mon[75227]: from='client.? 192.168.122.10:0/1866503327' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 31 08:51:19 compute-0 ceph-mon[75227]: pgmap v1477: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:19 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:20 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:21 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14614 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:21 compute-0 ceph-mon[75227]: pgmap v1478: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:21 compute-0 ceph-mon[75227]: from='client.14612 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:21 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 31 08:51:21 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822961792' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 08:51:21 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:22 compute-0 ceph-mon[75227]: from='client.14614 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:22 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/822961792' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.790824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849482790852, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1666, "num_deletes": 509, "total_data_size": 2228630, "memory_usage": 2263264, "flush_reason": "Manual Compaction"}
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849482916768, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 2184468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28682, "largest_seqno": 30347, "table_properties": {"data_size": 2177122, "index_size": 3840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 18135, "raw_average_key_size": 18, "raw_value_size": 2160325, "raw_average_value_size": 2262, "num_data_blocks": 173, "num_entries": 955, "num_filter_entries": 955, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849342, "oldest_key_time": 1769849342, "file_creation_time": 1769849482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 126019 microseconds, and 3336 cpu microseconds.
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.916834) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 2184468 bytes OK
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.916856) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.993468) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.993526) EVENT_LOG_v1 {"time_micros": 1769849482993514, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.993558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2220323, prev total WAL file size 2220323, number of live WAL files 2.
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.994373) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(2133KB)], [65(7208KB)]
Jan 31 08:51:22 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849482994469, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 9565648, "oldest_snapshot_seqno": -1}
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5059 keys, 7739020 bytes, temperature: kUnknown
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849483165540, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7739020, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7705693, "index_size": 19585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 127996, "raw_average_key_size": 25, "raw_value_size": 7614494, "raw_average_value_size": 1505, "num_data_blocks": 804, "num_entries": 5059, "num_filter_entries": 5059, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846771, "oldest_key_time": 0, "file_creation_time": 1769849482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "91992687-9ca4-489a-811f-a25b3432622d", "db_session_id": "RDN3DWKE2K2I6QTJYIJY", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.165823) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7739020 bytes
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.195829) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 55.9 rd, 45.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 6094, records dropped: 1035 output_compression: NoCompression
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.195868) EVENT_LOG_v1 {"time_micros": 1769849483195853, "job": 36, "event": "compaction_finished", "compaction_time_micros": 171151, "compaction_time_cpu_micros": 16365, "output_level": 6, "num_output_files": 1, "total_output_size": 7739020, "num_input_records": 6094, "num_output_records": 5059, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849483196282, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849483197099, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:22.994215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:23 compute-0 ceph-mon[75227]: rocksdb: (Original Log Time 2026/01/31-08:51:23.197246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 08:51:23 compute-0 ceph-mon[75227]: pgmap v1479: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:23 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:25 compute-0 nova_compute[238824]: 2026-01-31 08:51:25.009 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:25 compute-0 ceph-mon[75227]: pgmap v1480: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:25 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:26 compute-0 ovs-vsctl[260555]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 08:51:26 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:26 compute-0 virtqemud[239124]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 08:51:26 compute-0 virtqemud[239124]: hostname: compute-0
Jan 31 08:51:26 compute-0 virtqemud[239124]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 08:51:26 compute-0 virtqemud[239124]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 08:51:26 compute-0 virtqemud[239124]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 08:51:27 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: cache status {prefix=cache status} (starting...)
Jan 31 08:51:27 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: client ls {prefix=client ls} (starting...)
Jan 31 08:51:27 compute-0 lvm[260890]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:51:27 compute-0 lvm[260890]: VG ceph_vg0 finished
Jan 31 08:51:27 compute-0 lvm[260912]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:51:27 compute-0 lvm[260912]: VG ceph_vg1 finished
Jan 31 08:51:27 compute-0 lvm[260915]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:51:27 compute-0 lvm[260915]: VG ceph_vg2 finished
Jan 31 08:51:27 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:27 compute-0 ceph-mon[75227]: pgmap v1481: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:27 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 08:51:28 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14620 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 08:51:28 compute-0 nova_compute[238824]: 2026-01-31 08:51:28.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 08:51:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 31 08:51:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968346629' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 08:51:28 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14624 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:28 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 08:51:28 compute-0 ceph-mon[75227]: from='client.14618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:28 compute-0 ceph-mon[75227]: pgmap v1482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:28 compute-0 ceph-mon[75227]: from='client.14620 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:28 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1968346629' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 31 08:51:28 compute-0 ceph-mon[75227]: from='client.14624 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:28 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:51:28 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561520865' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:51:29 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 08:51:29 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:29 compute-0 ceph-mgr[75519]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:51:29 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: 2026-01-31T08:51:29.125+0000 7fcf0ed23640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:51:29 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: ops {prefix=ops} (starting...)
Jan 31 08:51:29 compute-0 nova_compute[238824]: 2026-01-31 08:51:29.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:29 compute-0 nova_compute[238824]: 2026-01-31 08:51:29.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 08:51:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 31 08:51:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154022046' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 08:51:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 31 08:51:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2145512582' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 08:51:29 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: session ls {prefix=session ls} (starting...)
Jan 31 08:51:29 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2561520865' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:51:29 compute-0 ceph-mon[75227]: from='client.14628 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3154022046' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 31 08:51:29 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2145512582' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 31 08:51:29 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 31 08:51:29 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254321740' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 08:51:30 compute-0 ceph-mds[96266]: mds.cephfs.compute-0.nafbok asok_command: status {prefix=status} (starting...)
Jan 31 08:51:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 08:51:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966179961' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:51:30 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14638 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:30 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 08:51:30 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303141896' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:51:30 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14642 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:30 compute-0 ceph-mon[75227]: pgmap v1483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:30 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/4254321740' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 31 08:51:30 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2966179961' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:51:30 compute-0 ceph-mon[75227]: from='client.14638 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:30 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/303141896' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:51:30 compute-0 ceph-mon[75227]: from='client.14642 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 08:51:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033206096' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:51:31 compute-0 nova_compute[238824]: 2026-01-31 08:51:31.340 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 31 08:51:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032501781' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 08:51:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 08:51:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742990929' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:51:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Optimize plan auto_2026-01-31_08:51:31
Jan 31 08:51:31 compute-0 ceph-mgr[75519]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 08:51:31 compute-0 ceph-mgr[75519]: [balancer INFO root] do_upmap
Jan 31 08:51:31 compute-0 ceph-mgr[75519]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'images']
Jan 31 08:51:31 compute-0 ceph-mgr[75519]: [balancer INFO root] prepared 0/10 upmap changes
Jan 31 08:51:31 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:31 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 31 08:51:31 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/722635469' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 08:51:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/4033206096' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:51:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1032501781' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 31 08:51:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3742990929' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:51:31 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/722635469' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 31 08:51:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 31 08:51:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76167573' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14654 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 08:51:32 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: 2026-01-31T08:51:32.429+0000 7fcf0ed23640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 08:51:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 08:51:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3383974240' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 08:51:32 compute-0 ceph-mgr[75519]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 08:51:32 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 31 08:51:32 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547894075' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 08:51:32 compute-0 ceph-mon[75227]: pgmap v1484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:32 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/76167573' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 31 08:51:32 compute-0 ceph-mon[75227]: from='client.14654 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:32 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3383974240' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:51:32 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3547894075' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14660 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:28.657486+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:29.657624+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:30.657758+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:31.657888+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:32.658063+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:33.658330+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:34.658485+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:35.658661+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:36.658803+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:37.658971+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:38.659104+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:39.659279+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71753728 unmapped: 589824 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:40.659412+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:41.659538+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:42.659720+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:43.659842+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:44.659957+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:45.660161+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:46.660310+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:47.660554+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:48.660750+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:49.660886+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:50.661049+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71786496 unmapped: 557056 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:51.661198+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:52.661387+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71794688 unmapped: 548864 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:53.661574+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:54.661751+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:55.661923+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 540672 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:56.662056+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:57.662229+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 532480 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:58.662428+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 524288 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 324.490722656s of 324.498870850s, submitted: 3
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:59.662581+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:00.662713+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:01.662910+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:02.663110+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:03.663346+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:04.663508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:05.663669+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:06.663852+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:07.664053+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:08.664198+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:09.664342+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:10.664526+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:11.664690+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:12.664884+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:13.665061+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:14.665208+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:15.665367+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:16.665520+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:17.665698+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:18.665859+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:19.666057+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:20.666189+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:21.666312+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:22.666507+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72007680 unmapped: 335872 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:23.666682+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:24.666938+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:25.667069+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 327680 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:26.667204+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:27.667360+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:28.667484+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72024064 unmapped: 319488 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:29.667620+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:30.667736+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72032256 unmapped: 311296 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:31.667845+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 303104 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:32.667981+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72048640 unmapped: 294912 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:33.668123+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:34.668245+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:35.668390+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 286720 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:36.668563+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72065024 unmapped: 278528 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:37.668710+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72073216 unmapped: 270336 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:38.668893+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:39.669030+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72081408 unmapped: 262144 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:40.669158+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:41.669301+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:42.669455+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72089600 unmapped: 253952 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:43.669591+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:44.669697+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72097792 unmapped: 245760 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:45.669886+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:46.670077+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:47.670200+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72105984 unmapped: 237568 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:48.670313+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:49.670469+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72114176 unmapped: 229376 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:50.670612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:51.670742+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72122368 unmapped: 221184 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:52.670940+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 212992 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:53.671122+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:54.671319+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72138752 unmapped: 204800 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:55.671542+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:56.671729+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 196608 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:57.671925+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:58.672167+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:59.672305+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:00.672442+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:01.672593+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:02.672752+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:03.672894+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:04.673047+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:05.673206+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:06.673368+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:07.673515+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:08.673688+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:09.673840+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:10.673976+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:11.674148+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:12.674843+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:13.674986+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:14.675116+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:15.675293+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:16.675423+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:17.675553+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:18.675705+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:19.675867+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:20.675994+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72155136 unmapped: 188416 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:21.676122+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:22.676305+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:23.676452+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:24.676569+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:25.676701+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:26.676857+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:27.677026+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:28.677217+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:29.677395+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:30.677542+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:31.677674+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72163328 unmapped: 180224 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:32.677877+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:33.678030+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:34.678176+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:35.678332+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72171520 unmapped: 172032 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:36.678487+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:37.678619+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:38.678769+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:39.678977+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:40.679156+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72179712 unmapped: 163840 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:41.679288+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:42.679458+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 155648 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:43.679834+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:44.680031+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:45.680193+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:46.680495+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:47.680903+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:48.681067+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:49.681190+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:50.681415+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:51.681547+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:52.681770+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:53.681970+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:54.682111+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:55.682241+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:56.682347+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:57.682485+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:58.682624+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:59.682728+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:00.682941+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72196096 unmapped: 147456 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:01.683185+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 139264 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:02.683318+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:03.683442+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:04.683573+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:05.683724+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:06.683846+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:07.684025+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:08.684234+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:09.684530+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:10.684678+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:11.686971+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:12.687169+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:13.687409+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 131072 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:14.687651+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 122880 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:15.687923+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:16.688146+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:17.688340+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:18.688468+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:19.688580+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:20.688807+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:21.689000+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:22.689286+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:23.689408+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:24.689548+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:25.689739+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:26.689913+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:27.690038+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:28.690218+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:29.690435+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:30.690572+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:31.690682+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:32.690843+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:33.691010+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:34.691161+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:35.691373+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:36.691558+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:37.691812+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:38.691988+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:39.692214+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:40.692370+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:41.692564+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:42.692729+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:43.692952+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:44.693134+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:45.693320+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 114688 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:46.693470+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:47.693621+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:48.693861+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:49.694062+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:50.694341+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:51.694514+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:52.694721+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:53.694923+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:54.695128+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:55.695305+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 106496 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:56.695509+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 98304 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:57.695758+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:58.695948+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:59.696129+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:00.696404+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 90112 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:01.696632+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:02.696852+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:03.697004+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:04.697130+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:05.697305+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:06.697467+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:07.697687+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:08.697865+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:09.698110+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:10.698345+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:11.698557+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:12.698760+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:13.698940+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:14.699100+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:15.699270+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:16.699462+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:17.699706+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:18.699861+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:19.700023+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:20.700138+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:21.700291+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:22.700552+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:23.700715+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:24.700864+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:25.701045+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:26.701328+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:27.701517+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:28.701729+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:29.701909+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:30.702101+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:31.702332+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:32.702630+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:33.702804+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:34.702977+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:35.703116+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:36.703296+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:37.703444+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 73728 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:38.703697+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:39.703886+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:40.704025+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:41.704165+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:42.704352+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:43.704561+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:44.704720+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:45.704923+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:46.705087+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:47.705325+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:48.705478+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:49.705602+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:50.705710+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:51.705931+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:52.706215+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:53.706319+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:54.706508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:55.706694+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:56.706819+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:57.706965+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:58.707132+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:59.707356+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:00.707481+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:01.707702+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72278016 unmapped: 65536 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:02.707876+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72286208 unmapped: 57344 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:03.708069+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc ms_handle_reset ms_handle_reset con 0x5603a3a9a000
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: get_auth_request con 0x5603a3817000 auth_method 0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:04.708314+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:05.708475+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:06.708639+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:07.708820+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:08.709036+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:09.709161+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:10.709336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:11.709481+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:12.709675+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:13.709833+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:14.710047+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:15.710208+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:16.710373+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:17.710574+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:18.710735+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:19.710925+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:20.711054+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:21.711188+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 843776 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:22.711310+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:23.711432+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:24.711628+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:25.711761+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:26.711946+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:27.712109+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:28.712271+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:29.712433+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:30.712580+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:31.712762+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 835584 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:32.713017+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:33.713199+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:34.713338+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:35.713506+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:36.713740+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:37.713931+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:38.714114+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:39.714267+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:40.714399+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:41.714571+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:42.714748+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:43.714932+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:44.715076+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:45.715279+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:46.715410+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:47.715538+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:48.715703+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:49.715832+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:50.715999+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:51.716115+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:52.716332+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:53.716993+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:54.717170+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:55.717392+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943949 data_alloc: 218103808 data_used: 6795
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:56.717568+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 827392 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:57.717760+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:58.717889+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 819200 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a62c1400
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.305572510s of 300.562957764s, submitted: 90
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:59.718049+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:00.718238+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:01.718348+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:02.718493+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 720896 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:03.718611+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:04.718724+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:05.718850+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:06.719012+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:07.719162+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:08.719403+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:09.719549+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:10.719685+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:11.719834+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 712704 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:12.719974+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:13.720094+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:14.720238+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:15.720370+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:16.720494+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:17.720592+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:18.720729+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 704512 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:19.720856+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:20.721010+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:21.721150+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 688128 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:22.721381+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:23.721558+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:24.721671+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:25.721807+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:26.721944+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:27.722074+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:28.722211+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:29.722356+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:30.722542+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:31.722687+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:32.722865+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:33.722978+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:34.723084+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:35.723223+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 679936 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:36.723368+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 671744 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:37.723520+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 671744 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:38.723669+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 671744 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:39.723782+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 655360 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:40.723903+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 655360 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:41.724026+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:42.724197+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:43.724353+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:44.724488+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:45.724611+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:46.724776+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:47.724936+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:48.725093+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:49.725324+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:50.725453+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:51.725632+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:52.725822+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:53.726017+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:54.726167+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:55.726374+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:56.726508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:57.726645+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:58.726806+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 638976 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:59.726919+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:00.727026+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:01.727437+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:02.727660+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:03.727824+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 622592 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:04.728030+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:05.728322+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:06.728522+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:07.728692+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:08.728848+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:09.729042+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:10.729231+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:11.729458+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:12.729848+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:13.730106+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:14.730327+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:15.730485+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 614400 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:16.730675+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:17.730842+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:18.731080+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:19.731396+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:20.731616+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:21.731869+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:22.732122+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:23.732369+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:24.733078+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:25.733363+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:26.733569+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:27.733753+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:28.733928+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:29.734141+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:30.734375+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:31.734564+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:32.734793+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:33.734979+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:34.735158+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:35.735345+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:36.735604+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:37.735879+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:38.736023+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:39.736214+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:40.736478+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:41.736683+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:42.736922+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:43.737175+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:44.737381+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:45.737543+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:46.737789+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:47.738004+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:48.738236+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:49.738445+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:50.738667+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:51.738809+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:52.739046+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:53.739195+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:54.739370+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:55.739506+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:56.739664+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:57.739878+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:58.740013+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:59.740179+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:00.740346+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:01.740602+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:02.740763+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:03.740947+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:04.741068+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:05.741245+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:06.741411+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:07.741545+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:08.741663+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:09.741892+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:10.742061+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:11.742352+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:12.742687+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:13.742940+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:14.743180+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:15.743439+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 606208 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:16.743614+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:17.743795+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:18.743949+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:19.744103+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:20.744328+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:21.744541+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:22.744756+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:23.744979+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:24.745163+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:25.745322+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:26.745496+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:27.745685+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:28.745922+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:29.746133+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:30.746326+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 598016 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:31.746552+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:32.746837+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:33.747072+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:34.747283+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:35.747487+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:36.747778+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:37.748027+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:38.748245+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:39.748431+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:40.748637+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:41.748833+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:42.749030+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:43.749208+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:44.749391+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:45.749629+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 589824 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:46.749810+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:47.750039+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:48.750230+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:49.750445+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:50.750690+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:51.751154+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:52.751391+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:53.751654+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:54.751958+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:55.752102+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:56.752329+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:57.752547+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:58.752709+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:59.752921+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:00.753131+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:01.753347+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 581632 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:02.753581+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:03.753792+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:04.754002+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:05.754153+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:06.754363+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:07.754555+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:08.754759+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:09.754936+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:10.755129+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:11.755269+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:12.755548+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:13.755708+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:14.755868+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:15.756021+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:16.756178+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:17.756325+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:18.756449+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:19.756653+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:20.756845+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:21.757015+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:22.757200+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:23.757363+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:24.757510+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:25.757815+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:26.757978+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:27.758117+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:28.758392+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:29.758612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:30.758760+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:31.758968+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:32.759223+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:33.759388+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:35.212801+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:36.212944+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:37.213151+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:38.213335+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:39.213464+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:40.213656+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:41.213892+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:42.214060+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:43.214307+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:44.214541+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:45.214694+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:46.214880+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 557056 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:47.215071+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:48.215320+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:49.215647+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:50.216069+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:51.216236+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:52.216525+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:53.216889+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:54.217168+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:55.217325+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:56.217625+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 548864 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:57.217840+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Cumulative writes: 5591 writes, 24K keys, 5591 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5591 writes, 826 syncs, 6.77 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 227 writes, 342 keys, 227 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 227 writes, 113 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.249       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.07              0.00         1    0.071       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.21              0.00         1    0.207       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de1a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.043       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.8 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5603a1de18d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:58.218404+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:59.218572+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:00.218758+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:01.218942+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:02.219194+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 516096 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:03.219486+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:04.219739+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:05.219968+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:06.220447+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:07.220708+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:08.220965+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:09.221205+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 491520 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:10.221366+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:11.221589+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:12.221778+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:13.222066+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:14.222228+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:15.222445+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 475136 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:16.222617+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:17.222839+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:18.223022+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:19.223142+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:20.223338+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:21.223467+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:22.223644+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:23.223857+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:24.224050+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:25.224175+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:26.224334+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:27.224538+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:28.224644+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:29.224817+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:30.225499+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:31.225661+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:32.225838+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:33.226003+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:34.560688+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:35.560850+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:36.560983+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:37.561162+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:38.561317+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:39.561491+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:40.561674+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:41.561837+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:42.562030+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:43.562298+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:44.562470+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:45.562636+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:46.562797+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:47.562949+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:48.563221+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:49.563364+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:50.563499+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:51.563664+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:52.563794+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:53.563998+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:54.564140+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:55.564314+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:56.564512+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:57.564683+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:58.564863+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 466944 heap: 73392128 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.835449219s of 299.881591797s, submitted: 24
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:59.565027+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1425408 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [0,0,1])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:00.565157+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 1368064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:01.565310+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 270336 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:02.565438+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 180224 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [1])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:03.565664+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 155648 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:04.565857+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 139264 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:05.566036+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 122880 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:06.566197+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 114688 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945429 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:07.566318+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:08.566462+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:09.566592+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:10.566826+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:11.566992+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:12.567124+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:13.567307+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:14.567446+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:15.567601+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 106496 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:16.567772+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:17.568082+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:18.568315+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:19.568548+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:20.568758+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:21.569005+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:22.569285+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:23.569713+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:24.569944+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:25.570096+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:26.570373+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:27.570561+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:28.570675+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:29.570791+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:30.570958+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:31.571121+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:32.571322+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:33.571653+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:34.571820+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:35.572007+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 98304 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:36.572167+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:37.572377+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:38.572641+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:39.572811+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:40.572948+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:41.573178+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:42.573465+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:43.574362+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:44.574559+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:45.574779+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:46.574970+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 90112 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:47.575239+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:48.575612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:49.575861+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:50.576079+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:51.576670+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:52.576792+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:53.577046+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:54.577226+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:55.577484+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:56.577682+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 81920 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:57.577889+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 945357 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a3420c00
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:58.578272+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 heartbeat osd_stat(store_statfs(0x4fcea8000/0x0/0x4ffc00000, data 0xc1131/0x184000, compress 0x0/0x0/0x0, omap 0x11022, meta 0x2bbefde), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 56.759403229s of 59.501785278s, submitted: 90
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 983040 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:59.578750+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 942080 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:00.579041+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 82984960 unmapped: 892928 heap: 83877888 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:01.579154+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 17547264 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:02.579336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001648 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 130 ms_handle_reset con 0x5603a3420c00 session 0x5603a53c1880
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fc69e000/0x0/0x4ffc00000, data 0x8c48f0/0x98c000, compress 0x0/0x0/0x0, omap 0x11604, meta 0x2bbe9fc), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 17514496 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:03.579514+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a6264c00
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 74997760 unmapped: 17276928 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:04.579648+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 17195008 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:05.579765+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75079680 unmapped: 17195008 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:06.579959+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 131 ms_handle_reset con 0x5603a6264c00 session 0x5603a606da40
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:07.580201+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1030439 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:08.580321+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fc226000/0x0/0x4ffc00000, data 0xd380a6/0xe04000, compress 0x0/0x0/0x0, omap 0x11bc8, meta 0x2bbe438), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:09.580438+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:10.580616+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:11.580795+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75112448 unmapped: 17162240 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.325613022s of 13.250974655s, submitted: 45
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:12.580982+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033053 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:13.581331+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 132 heartbeat osd_stat(store_statfs(0x4fc223000/0x0/0x4ffc00000, data 0xd39b25/0xe07000, compress 0x0/0x0/0x0, omap 0x11ea0, meta 0x2bbe160), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:14.581691+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:15.582009+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 17121280 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a56cc800
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:16.582161+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 17219584 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc226000/0x0/0x4ffc00000, data 0xd39b02/0xe06000, compress 0x0/0x0/0x0, omap 0x11ea0, meta 0x2bbe160), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fc226000/0x0/0x4ffc00000, data 0xd39b02/0xe06000, compress 0x0/0x0/0x0, omap 0x11ea0, meta 0x2bbe160), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:17.582320+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 17203200 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1033880 data_alloc: 218103808 data_used: 8479
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 133 ms_handle_reset con 0x5603a56cc800 session 0x5603a40f0a80
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:18.582540+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:19.582751+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:20.582905+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 17104896 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a2ea1800
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:21.583115+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16924672 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca25000/0x0/0x4ffc00000, data 0x53b6bf/0x607000, compress 0x0/0x0/0x0, omap 0x1219d, meta 0x2bbde63), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:22.583379+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75350016 unmapped: 16924672 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993344 data_alloc: 218103808 data_used: 12540
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.058580399s of 10.925365448s, submitted: 46
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 133 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:23.583612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75358208 unmapped: 16916480 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:24.583781+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 16908288 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:25.583912+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 135 ms_handle_reset con 0x5603a2ea1800 session 0x5603a606d340
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced4a/0x19d000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:26.584055+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced27/0x19c000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:27.584307+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 135 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced27/0x19c000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976492 data_alloc: 218103808 data_used: 12540
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:28.584623+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:29.584826+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xced27/0x19c000, compress 0x0/0x0/0x0, omap 0x127ad, meta 0x2bbd853), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:30.584973+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:31.593007+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75431936 unmapped: 16842752 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:32.593174+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:33.593394+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:34.593596+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:35.593811+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:36.593973+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:37.594160+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:38.594343+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:39.599401+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:40.599571+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:41.599742+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:42.599911+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:43.600155+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:44.600307+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:45.600537+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:46.600780+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:47.601843+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:48.602042+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:49.602306+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:50.602507+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:51.602707+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:52.602958+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:53.603184+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:54.603461+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:55.603835+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:56.604044+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:57.604197+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:58.604409+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:59.604689+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:00.604903+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:01.605172+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:02.605402+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:03.605711+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:04.605995+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:05.606215+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:06.606488+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:07.606746+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:08.606932+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:09.607203+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:10.607409+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:11.607626+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75440128 unmapped: 16834560 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:12.607856+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:13.608109+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:14.608354+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:15.608572+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:16.608741+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:17.608914+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:18.609148+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:19.609314+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:20.609497+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:21.609669+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:22.609908+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:23.610198+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:24.610364+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:25.610612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:26.610841+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 16818176 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:27.611073+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:28.611303+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:29.611456+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:30.611639+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:31.611810+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:32.612023+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:33.612569+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:34.612824+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:35.612957+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:36.613181+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:37.613351+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:38.613500+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:39.613648+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:40.613855+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:41.614044+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:42.614180+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:43.614334+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:44.614451+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:45.614587+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:46.614773+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:47.614925+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:48.615102+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:49.615336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:50.615609+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:51.615763+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:52.615926+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:53.616099+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:54.616245+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:55.616434+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:56.616584+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:57.616741+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:58.616871+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:59.617046+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:00.617373+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:01.617928+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:02.618207+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:03.618591+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread fragmentation_score=0.000143 took=0.000028s
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:04.618827+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:05.619043+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 16809984 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:06.619244+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:07.619431+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:08.619625+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:09.619809+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:10.620022+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:11.620174+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:12.620408+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:13.620645+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:14.620815+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:15.620961+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:16.621153+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:17.621376+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:18.621592+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:19.621782+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:20.621988+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:21.622156+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:22.622550+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:23.622736+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:24.622955+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:25.623164+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:26.623338+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:27.623494+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:28.623639+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:29.623794+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:30.623959+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:31.624112+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:32.624384+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:33.624783+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:34.624981+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:35.625183+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:36.625398+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:37.625715+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:38.625940+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:39.626145+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:40.626406+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 16801792 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:41.626676+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:42.626838+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:43.627104+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:44.627386+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:45.627592+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:46.627859+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:47.628024+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:48.628212+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:49.628392+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:50.628542+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:51.628785+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:52.628996+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:53.629168+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:54.631072+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:55.631346+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 16793600 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:56.631515+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:57.631729+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:58.632088+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:59.632308+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:00.632464+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:01.632621+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 16785408 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:02.632947+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:03.633138+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:04.633313+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:05.633445+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 16777216 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:06.633821+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:07.634026+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:08.634169+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:09.634289+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:10.634417+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:11.634559+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:12.634863+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:13.634998+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:14.635125+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:15.635336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:16.635486+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:17.635628+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:18.635750+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:19.635868+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:20.635990+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:21.636122+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:22.636276+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:23.636476+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:24.636612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:25.636810+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:26.636933+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:27.637114+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:28.637326+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:29.637465+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:30.637599+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75505664 unmapped: 16769024 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:31.637747+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:32.637956+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:33.638172+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:34.638967+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:35.639396+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:36.639633+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:37.639983+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:38.640226+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 nova_compute[238824]: 2026-01-31 08:51:33.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:39.640453+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:40.640613+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:41.640756+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:42.640920+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:43.641442+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:44.641601+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:45.641861+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:46.641999+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:47.642300+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:48.642516+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:49.642680+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:50.643004+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:51.643346+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:52.643680+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:53.643998+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:54.644231+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:55.644472+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:56.644658+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:57.644827+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:58.645045+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:59.645189+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:00.645422+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:01.645608+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:02.645812+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:03.646047+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:04.646306+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:05.646491+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:06.646654+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:07.646802+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:08.646953+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:09.647107+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:10.647335+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:11.647551+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:12.647687+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:13.647851+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:14.647985+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:15.648118+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:16.648303+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:17.648438+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:18.648600+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:19.648749+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:20.648912+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:21.649129+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:22.649298+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:23.649469+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:24.649659+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:25.649821+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:26.649983+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:27.650149+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:28.650308+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:29.650466+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:30.650620+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:31.650761+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:32.650943+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:33.651113+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:34.651292+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:35.651413+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:36.651646+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 16760832 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:37.651778+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:38.653137+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:39.653644+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:40.653829+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:41.654609+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:42.654821+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:43.655470+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:44.656008+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:45.656351+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:46.656945+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:47.657280+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:48.657866+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:49.658156+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:50.658589+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:51.658961+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:52.659284+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:53.659609+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:54.659923+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:55.660092+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:56.660383+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:57.660653+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:58.660929+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:59.661119+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:00.661339+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:01.661566+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:02.661823+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:03.662134+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:04.662371+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:05.662564+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:06.662713+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:07.662929+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:08.663081+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:09.663239+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:10.663420+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:11.663575+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:12.663764+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:13.663980+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:14.664158+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:15.664319+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75522048 unmapped: 16752640 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:16.664498+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:17.664630+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:18.664789+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:19.664944+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:20.665093+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:21.665558+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:22.665692+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:23.665885+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:24.666017+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:25.666140+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:26.666384+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:27.666505+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:28.666662+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:29.666767+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:30.666957+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:31.667080+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:32.667241+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:33.667465+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:34.667619+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:35.667806+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:36.668010+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:37.668150+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:38.668289+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:39.668434+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:40.668608+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:41.668723+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:42.668875+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:43.669275+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:44.669647+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:45.669788+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:46.669988+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:47.670357+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:48.670647+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:49.670963+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:50.671538+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:51.671762+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:52.672050+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:53.672244+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:54.672480+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:55.672703+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:56.672966+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:57.673213+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:58.673413+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:59.673618+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:00.674934+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:01.675114+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 16744448 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:02.675313+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:03.675544+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:04.675730+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:05.675887+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:06.676110+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:07.676330+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:08.676609+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:09.676757+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:10.676913+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:11.677079+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:12.677233+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:13.680108+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:14.680327+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:15.680530+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:16.680677+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:17.680825+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:18.680895+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:19.681087+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:20.681301+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:21.681474+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:22.681667+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:23.681916+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:24.682098+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:25.682247+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:26.682442+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:27.682544+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:28.682696+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:29.682845+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:30.683015+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 16728064 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:31.683188+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:32.683906+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:33.684075+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:34.684234+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:35.684430+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:36.684777+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:37.684935+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:38.685069+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:39.685219+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:40.685431+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:41.685583+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:42.685766+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:43.686035+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:44.686209+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:45.686372+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:46.686525+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:47.686942+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:48.687101+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:49.687313+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:50.687671+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:51.688011+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:52.688328+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:53.688632+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:54.688923+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:55.689137+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:56.689419+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:57.689636+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:58.689906+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:59.690095+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:00.690313+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:01.690542+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:02.690811+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:03.691078+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:04.691340+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:05.691622+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:06.691863+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:07.692007+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:08.692435+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:09.692653+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:10.692857+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:11.693041+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:12.693298+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:13.693573+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:14.693765+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:15.693893+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:16.694099+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:17.694237+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 16719872 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:18.694416+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:19.694537+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:20.694717+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:21.694896+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:22.695060+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:23.695241+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:24.695467+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:25.695650+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:26.695816+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:27.695975+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:28.696111+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:29.696268+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979202 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:30.696425+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:31.696595+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:32.696752+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:33.696938+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 16711680 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fce8b000/0x0/0x4ffc00000, data 0xd07a6/0x19f000, compress 0x0/0x0/0x0, omap 0x12abc, meta 0x2bbd544), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a56cb800
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 429.654937744s of 431.133453369s, submitted: 53
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:34.697081+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 16572416 heap: 92274688 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 982061 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:35.697240+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 24797184 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 137 ms_handle_reset con 0x5603a56cb800 session 0x5603a603f6c0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a62be400
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:36.697401+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24485888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:37.697602+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 24485888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 138 ms_handle_reset con 0x5603a62be400 session 0x5603a603fc00
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba15000/0x0/0x4ffc00000, data 0x1542398/0x1615000, compress 0x0/0x0/0x0, omap 0x12e9f, meta 0x2bbd161), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:38.697800+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 24461312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:39.697998+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097417 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x1543f34/0x1618000, compress 0x0/0x0/0x0, omap 0x1312a, meta 0x2bbced6), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:40.698196+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x1543f34/0x1618000, compress 0x0/0x0/0x0, omap 0x1312a, meta 0x2bbced6), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:41.698343+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:42.698479+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:43.698682+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:44.698850+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097417 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:45.699011+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:46.699209+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fba10000/0x0/0x4ffc00000, data 0x1543f34/0x1618000, compress 0x0/0x0/0x0, omap 0x1312a, meta 0x2bbced6), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:47.699345+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:48.699463+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 24272896 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a3a4b000
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.541514397s of 15.457092285s, submitted: 25
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:49.699569+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 24264704 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 139 ms_handle_reset con 0x5603a3a4b000 session 0x5603a604c540
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1034016 data_alloc: 218103808 data_used: 16601
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:50.699698+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 23199744 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:51.699895+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 23199744 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a41e8c00
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:52.700038+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 23199744 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 140 ms_handle_reset con 0x5603a41e8c00 session 0x5603a6100700
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd76be/0x1ab000, compress 0x0/0x0/0x0, omap 0x13686, meta 0x2bbc97a), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:53.700226+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:54.700344+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997178 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:55.700472+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:56.700732+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd76be/0x1ab000, compress 0x0/0x0/0x0, omap 0x13686, meta 0x2bbc97a), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.8 total, 600.0 interval
                                           Cumulative writes: 6134 writes, 25K keys, 6134 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6134 writes, 1062 syncs, 5.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 543 writes, 1575 keys, 543 commit groups, 1.0 writes per commit group, ingest: 0.86 MB, 0.00 MB/s
                                           Interval WAL: 543 writes, 236 syncs, 2.30 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:57.700952+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:58.701149+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:59.701333+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fce7f000/0x0/0x4ffc00000, data 0xd76be/0x1ab000, compress 0x0/0x0/0x0, omap 0x13686, meta 0x2bbc97a), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997178 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:00.701537+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:01.701736+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 23175168 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.392799377s of 12.520668983s, submitted: 48
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:02.701866+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 23166976 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:03.702047+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 23166976 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc ms_handle_reset ms_handle_reset con 0x5603a3817000
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: get_auth_request con 0x5603a56cb800 auth_method 0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:04.702790+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:05.702916+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:06.703057+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:07.703231+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:08.703401+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:09.703525+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:10.703651+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:11.703810+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:12.703977+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:13.704185+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:14.704340+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:15.704500+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:16.704634+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:17.704856+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:18.705112+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:19.705319+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:20.705508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:21.705755+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:22.705921+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:23.706814+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:24.707130+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:25.707965+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:26.708233+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:27.708827+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:28.709107+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:29.709461+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:30.709697+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:31.710108+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:32.710332+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:33.710724+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:34.710968+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:35.711323+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:36.711511+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:37.711765+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:38.711998+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:39.712222+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:40.712416+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:41.712571+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:42.712678+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:43.712943+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:44.713062+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:45.713294+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:46.713424+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:47.713581+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:48.713737+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:49.713902+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:50.714067+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:51.714185+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:52.714354+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:53.714543+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:54.714685+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999952 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:55.714934+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:56.715550+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:57.716047+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:58.716658+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 77873152 unmapped: 22798336 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 57.516902924s of 57.524673462s, submitted: 14
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:59.716802+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 78938112 unmapped: 21733376 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:00.717180+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79183872 unmapped: 21487616 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:01.717344+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:02.717645+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:03.717952+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:04.718277+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:05.718541+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:06.718814+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:07.719056+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:08.719275+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:09.719477+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:10.719672+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:11.719860+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:12.720004+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:13.720200+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:14.720343+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:15.720519+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:16.720649+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:17.720765+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:18.720966+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:19.721134+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:20.721302+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:21.721441+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:22.721615+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:23.721804+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:24.721921+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:25.722059+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:26.722222+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:27.722357+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:28.722828+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:29.723006+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:30.723106+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:31.723196+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:32.723318+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:33.723464+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:34.723589+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:35.723704+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:36.723839+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:37.723963+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:38.724076+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:39.724217+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:40.724330+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:41.724464+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:42.724595+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:43.724776+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:44.724938+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:45.725115+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:46.725286+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:47.725428+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:48.725588+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:49.725739+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:50.725862+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:51.725990+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:52.726137+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:53.726324+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:54.726443+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:55.726564+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:56.726773+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:57.726952+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:58.727112+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:59.727294+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:00.727511+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:01.727836+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79233024 unmapped: 21438464 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:02.728059+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:03.728243+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:04.728548+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:05.728699+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:06.728855+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:07.729007+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:08.729222+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:09.729388+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:10.729529+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:11.729671+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:12.729853+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:13.730046+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:14.730192+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:15.730355+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:16.730638+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:17.730796+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:18.730997+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:19.731153+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:20.731424+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:21.731659+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:22.731891+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:23.732120+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:24.732431+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:25.732601+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:26.732806+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:27.733014+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:28.733159+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:29.733334+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:30.733528+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:31.734681+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:32.734818+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:33.735005+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:34.735169+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:35.735303+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:36.735430+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:37.735560+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:38.735710+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:39.735847+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:40.735989+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:41.736116+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:42.736242+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:43.736440+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:44.736608+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:45.736750+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:46.736897+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:47.737071+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:48.737203+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:49.737356+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:50.737508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:51.737626+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:52.737735+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:53.737929+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:54.738166+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:55.738308+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:56.738440+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:57.738641+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:58.738813+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:59.738958+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:00.739097+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:01.739241+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:02.739434+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:03.739611+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:04.739756+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:05.739884+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:06.740020+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:07.740149+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:08.740325+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:09.740456+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:10.740625+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:11.740763+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:12.740910+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:13.741075+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:14.741218+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:15.741405+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:16.741528+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:17.741636+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:18.741898+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:19.742062+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:20.742329+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:21.742559+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:22.742765+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:23.743176+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:24.743331+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:25.743532+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:26.743699+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:27.743888+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:28.744071+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:29.744310+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:30.744454+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:31.744573+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:32.744697+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:33.744896+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:34.745075+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:35.745232+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:36.745432+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:37.745569+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:38.745699+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:39.745863+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:40.746025+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:41.746179+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:42.746323+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:43.746508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:44.746641+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:45.746716+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:46.746844+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:47.747006+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:48.747162+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:49.747303+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:50.747585+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:51.747719+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:52.747862+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:53.748001+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:54.748119+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:55.748223+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:56.748353+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:57.748562+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:58.748679+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:59.748793+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:00.748933+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:01.749098+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:02.749366+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:03.749620+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:04.749821+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:05.749952+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:06.750100+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:07.750324+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:08.750461+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:09.750601+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:10.750741+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:11.750885+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:12.751037+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:13.751218+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:14.751318+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:15.751436+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:16.751622+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:17.751748+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:18.751888+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:19.752035+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:20.752362+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:21.752483+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:22.752653+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:23.752855+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:24.752995+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:25.753105+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:26.753284+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:27.753425+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:28.753610+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:29.753742+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:30.753902+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:31.754070+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:32.754217+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:33.754412+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:34.754563+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:35.754718+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:36.754841+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:37.755022+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:38.755188+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:39.755356+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:40.755541+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:41.755673+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:42.755839+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:43.756029+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:44.756203+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:45.756357+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:46.756497+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:47.756664+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:48.756844+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:49.756975+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:50.757098+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:51.757340+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:52.757541+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:53.757748+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:54.757917+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:55.758148+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:56.758350+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:57.758480+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:58.758633+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:59.758812+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:00.758944+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:01.759066+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79241216 unmapped: 21430272 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:02.759191+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:03.759364+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:04.759497+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:05.759666+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:06.759787+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:07.759999+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:08.760195+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:09.760332+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:10.760470+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:11.760596+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:12.760797+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:13.761049+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:14.761393+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:15.761577+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:16.761851+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:17.762095+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:18.762383+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:19.762670+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:20.762916+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:21.763346+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:22.763482+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:23.763708+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:24.763995+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:25.764115+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:26.764359+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:27.764510+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:28.764994+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:29.765443+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:30.765803+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:31.766216+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:32.766785+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:33.767193+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:34.767455+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:35.767705+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:36.768172+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:37.768486+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:38.768830+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:39.769078+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:40.769305+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:41.769593+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:42.769966+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:43.770294+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:44.770506+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:45.770726+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:46.771094+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:47.771647+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:48.773050+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:49.773811+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:50.774967+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:51.775604+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.775803+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:53.776329+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:54.776769+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:55.776903+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.777164+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:57.777302+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:58.777660+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:59.777919+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.778177+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:01.778392+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:02.778544+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:03.778808+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:04.779034+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:05.779223+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:06.779367+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:07.779524+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:08.779782+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:09.779988+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:10.780187+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:11.780394+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:12.780533+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:13.780677+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:14.780831+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:15.781028+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:16.781219+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:17.781384+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:18.781604+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:19.781768+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:20.781960+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:21.782175+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:22.782425+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:23.782665+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:24.782828+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:25.782976+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:26.783128+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:27.783321+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:28.783485+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:29.784362+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:30.784612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:31.784746+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:32.784888+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:33.785056+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:34.785157+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:35.785343+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:36.785532+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:37.785657+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:38.785767+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:39.785907+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:40.786023+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:41.786348+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:42.786617+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:43.786841+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:44.787036+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:45.787215+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:46.787360+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:47.787527+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:48.787658+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.787840+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.787992+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.788123+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.788282+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.788467+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.788587+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.788787+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.789018+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.789166+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.789302+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.789457+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.789647+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.789823+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.789976+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79257600 unmapped: 21413888 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.790194+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.790350+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.790530+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.790819+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.790945+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.791149+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.791281+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.791440+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.791612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.791753+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.791991+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.792212+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.803341+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.803511+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.803706+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.803860+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.804022+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.804235+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.804553+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.804807+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.805035+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.805303+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.805509+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.805668+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.805811+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.806021+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.806228+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.806462+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.806598+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.806784+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.807034+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.807176+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.807364+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.807524+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.807637+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.807789+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.807938+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.808074+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.808317+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.808529+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.808742+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.809024+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.809204+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.809456+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.809586+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.809850+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.810031+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.810219+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.810374+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.810489+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.810636+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.810796+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.810947+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.811160+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.811385+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.811554+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.811699+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.811907+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.812127+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 21397504 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.812297+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.812505+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.812677+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.812819+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.813033+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.813176+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.813317+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.813469+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.813739+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.813902+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.814039+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.814324+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.814520+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.814747+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.814911+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.815057+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.815240+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.815620+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.815805+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.816002+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.816190+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.816562+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.816872+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.817148+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.817412+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.817540+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.817702+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.817842+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.818001+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.818185+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.818344+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.818562+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.818714+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.818893+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.819091+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.819293+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.819446+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.819628+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.819751+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.819886+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.819987+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.820104+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.820328+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.820534+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.820761+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.820958+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.821168+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.821464+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.821601+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.821771+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.822025+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.822318+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.822608+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.822830+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.823009+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.823198+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.823376+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.823543+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.823744+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.823938+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 21389312 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.824062+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.824347+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.824493+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.824633+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.824785+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.824979+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.825115+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.825425+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.825665+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.825906+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.826089+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.826336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.826529+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.826789+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.826976+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.827192+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.827356+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.827515+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.827641+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.827853+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.828098+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.828359+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.828550+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.828705+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.828941+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.829155+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.829336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.829500+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.829723+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.829914+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.830155+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.830397+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.831296+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.832072+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.832622+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.833117+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.833534+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.833892+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.834221+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.834415+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.834635+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.834891+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.835024+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.835197+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.835368+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.835478+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.835624+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.835752+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.836007+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.836232+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.836512+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.836726+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.836943+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.837133+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.837314+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.8 total, 600.0 interval
                                           Cumulative writes: 6379 writes, 26K keys, 6379 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6379 writes, 1179 syncs, 5.41 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 245 writes, 416 keys, 245 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                           Interval WAL: 245 writes, 117 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.837482+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.837647+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.837819+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.837958+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.838069+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.838215+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 21364736 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.838381+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.838489+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.838630+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.838751+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.838876+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.839030+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.839141+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.839308+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.839444+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.839581+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.839730+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.839866+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.840035+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.840179+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.840530+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.840796+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.840942+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.841101+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.841340+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.841492+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.841653+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.841774+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.841975+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.842137+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.842318+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.842487+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.842689+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.842841+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.842987+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.843139+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.843348+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.843508+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.843661+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.843775+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.843927+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.844092+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.844269+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.844458+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.844607+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.844789+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.845019+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.845212+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.845339+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.845486+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.845630+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.845997+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.846134+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.846327+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.846467+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.846589+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.846736+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.846855+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.847001+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.847177+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.847333+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.847490+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.847676+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 21348352 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 599.984802246s of 600.752502441s, submitted: 114
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.847806+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 21291008 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.848379+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [0,0,0,0,1])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 79552512 unmapped: 21118976 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.848510+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.848703+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.848826+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.849003+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.849183+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.849309+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.849498+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.849625+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.849733+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.849843+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.849966+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.850119+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.850233+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.850368+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.850475+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.850612+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999232 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.850780+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.850929+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.851083+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0xd913d/0x1ae000, compress 0x0/0x0/0x0, omap 0x13956, meta 0x2bbc6aa), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.851234+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80584704 unmapped: 20086784 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.851350+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a4649000
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.280435562s of 22.907487869s, submitted: 90
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 19955712 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000970 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.851486+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80740352 unmapped: 19931136 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.851627+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _renew_subs
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 ms_handle_reset con 0x5603a4649000 session 0x5603a606c700
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca0d000/0x0/0x4ffc00000, data 0x54914d/0x61f000, compress 0x0/0x0/0x0, omap 0x13c58, meta 0x2bbc3a8), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.851753+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.851891+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.852013+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.852128+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.852292+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.852444+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.852624+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:32.852760+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:33.852969+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:34.853165+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:35.853299+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:36.853473+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:37.853604+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:38.853765+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:39.853964+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:40.854097+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:41.854281+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:42.854420+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:43.854582+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:44.854813+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:45.854980+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:46.855165+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:47.855316+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:48.855438+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:49.855614+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:50.855769+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:51.855934+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:52.856076+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:53.856344+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:54.856485+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:55.856645+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:56.856768+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:57.856924+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:58.857110+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:59.857300+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:00.857422+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:01.857559+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:02.857673+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:03.857848+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:04.857999+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:05.858159+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:06.858300+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:07.858426+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:08.858575+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:09.858735+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:10.858924+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:11.859034+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:12.859306+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:13.859770+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:14.860599+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:15.861303+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:16.861909+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:17.862068+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:18.862604+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:19.863065+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:20.863498+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:21.863813+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:22.864044+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:23.864488+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:24.864749+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:25.864900+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:26.865324+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:27.865627+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 heartbeat osd_stat(store_statfs(0x4fca08000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13ffa, meta 0x2bbc006), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80756736 unmapped: 19914752 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1029673 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:28.866054+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: handle_auth_request added challenge on 0x5603a3a4d800
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 65.113044739s of 65.703971863s, submitted: 12
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 143 ms_handle_reset con 0x5603a3a4d800 session 0x5603a53c0e00
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80773120 unmapped: 19898368 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:29.866324+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fca0a000/0x0/0x4ffc00000, data 0x54ace9/0x622000, compress 0x0/0x0/0x0, omap 0x13d7c, meta 0x2bbc284), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:30.866621+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:31.866906+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdc8c9/0x1b4000, compress 0x0/0x0/0x0, omap 0x143bc, meta 0x2bbbc44), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:32.867106+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008610 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:33.867293+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:34.867554+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:35.867805+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 80789504 unmapped: 19881984 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:36.868061+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 143 heartbeat osd_stat(store_statfs(0x4fce76000/0x0/0x4ffc00000, data 0xdc8c9/0x1b4000, compress 0x0/0x0/0x0, omap 0x143bc, meta 0x2bbbc44), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:37.868347+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:38.868556+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:39.868740+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:40.868920+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:41.869107+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:42.869331+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:43.869688+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:44.869915+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:45.870070+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:46.870842+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:47.871166+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:48.871336+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.871588+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.871722+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.871867+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.872051+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.872301+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.872550+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.872770+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.872956+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.873184+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.873445+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.873726+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.874011+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.874135+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 18833408 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.874366+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.874526+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.874651+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.874793+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.875048+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.875174+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.875366+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.875535+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.875677+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.875877+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.876124+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.876331+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.876637+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.876819+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.877000+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.877173+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.877389+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.877601+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.877793+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.878008+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.878192+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.878357+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.878628+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.878879+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.879220+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.879392+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.879583+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.879787+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.880020+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.880155+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.880363+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.880631+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.880883+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.881162+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.881377+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.881557+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.881810+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.882023+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.882304+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.882477+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.882667+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.882842+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.882995+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.883195+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.883393+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.883595+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.883826+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.883987+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.884124+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.884308+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.884445+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.884595+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.884719+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.884861+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.884970+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.885105+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.885217+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:33 compute-0 ceph-osd[88096]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:33 compute-0 ceph-osd[88096]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011384 data_alloc: 218103808 data_used: 20662
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.885384+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 18825216 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.885560+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 18718720 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'config diff' '{prefix=config diff}'
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'config show' '{prefix=config show}'
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.885710+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 18300928 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: tick
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_tickets
Jan 31 08:51:33 compute-0 ceph-osd[88096]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.885839+0000)
Jan 31 08:51:33 compute-0 ceph-osd[88096]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 18022400 heap: 100671488 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:33 compute-0 ceph-osd[88096]: osd.2 144 heartbeat osd_stat(store_statfs(0x4fce73000/0x0/0x4ffc00000, data 0xde348/0x1b7000, compress 0x0/0x0/0x0, omap 0x146c5, meta 0x2bbb93b), peers [0,1] op hist [])
Jan 31 08:51:33 compute-0 ceph-osd[88096]: do_command 'log dump' '{prefix=log dump}'
Jan 31 08:51:33 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 31 08:51:33 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315195756' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:33 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:51:33 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:34 compute-0 ceph-mon[75227]: from='client.14660 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:34 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/315195756' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 31 08:51:34 compute-0 ceph-mon[75227]: from='client.14664 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:34 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 08:51:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 31 08:51:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/616624009' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:51:34 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 08:51:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:34 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 31 08:51:34 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407287328' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:51:34 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: pgmap v1485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:35 compute-0 ceph-mon[75227]: from='client.14666 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/616624009' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: from='client.14670 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/407287328' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 31 08:51:35 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/277069709' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:51:35 compute-0 nova_compute[238824]: 2026-01-31 08:51:35.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:35 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:35 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 31 08:51:35 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3529536462' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:51:35 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14682 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:35 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:36 compute-0 sudo[262170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:36 compute-0 sudo[262170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:36 compute-0 sudo[262170]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:36 compute-0 ceph-mon[75227]: from='client.14674 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/277069709' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: from='client.14678 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3529536462' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: from='client.14682 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:36 compute-0 sudo[262203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 31 08:51:36 compute-0 sudo[262203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:36 compute-0 crontab[262232]: (root) LIST (root)
Jan 31 08:51:36 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569238403' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:51:36 compute-0 nova_compute[238824]: 2026-01-31 08:51:36.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:36 compute-0 nova_compute[238824]: 2026-01-31 08:51:36.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 08:51:36 compute-0 nova_compute[238824]: 2026-01-31 08:51:36.340 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 08:51:36 compute-0 nova_compute[238824]: 2026-01-31 08:51:36.374 238828 DEBUG nova.compute.manager [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 08:51:36 compute-0 sudo[262203]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:51:36 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14690 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:36 compute-0 sudo[262359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:36 compute-0 sudo[262359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:36 compute-0 sudo[262359]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:36 compute-0 sudo[262388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 31 08:51:36 compute-0 sudo[262388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 31 08:51:36 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215502721' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 08:51:37 compute-0 podman[262473]: 2026-01-31 08:51:37.095223955 +0000 UTC m=+0.024027076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:51:37 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14692 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:37 compute-0 podman[262473]: 2026-01-31 08:51:37.323764535 +0000 UTC m=+0.252567646 container create 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:37 compute-0 ceph-mon[75227]: pgmap v1486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='client.14686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3569238403' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='client.14690 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:37 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2215502721' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 31 08:51:37 compute-0 systemd[1]: Started libpod-conmon-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope.
Jan 31 08:51:37 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:37 compute-0 podman[262473]: 2026-01-31 08:51:37.639937432 +0000 UTC m=+0.568740553 container init 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:51:37 compute-0 podman[262473]: 2026-01-31 08:51:37.64691274 +0000 UTC m=+0.575715841 container start 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:32.239069+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 0 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:33.239329+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 0 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:34.239471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83795968 unmapped: 0 heap: 83795968 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:35.239683+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1040384 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:36.239840+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 1040384 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:37.240018+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1032192 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:38.240168+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 1032192 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:39.240350+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1024000 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:40.240555+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1024000 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:41.241406+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 1024000 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:42.241570+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1015808 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:43.241722+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 1015808 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:44.241888+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 1007616 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:45.242070+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 999424 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:46.242214+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 999424 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:47.242403+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 991232 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:48.242605+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 991232 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:49.242891+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 983040 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:50.243075+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 974848 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:51.243200+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 974848 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:52.243328+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 966656 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:53.243435+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 966656 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:54.243602+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 958464 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:55.243923+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 958464 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:56.244081+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 958464 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:57.244282+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 942080 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:58.244418+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 296.841705322s of 296.850128174s, submitted: 4
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 1105920 heap: 84844544 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:59.244570+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:00.244742+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:01.244865+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:02.245013+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:03.245168+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:04.245325+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:05.245541+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:06.245666+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 2326528 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:07.245814+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 2318336 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:08.245928+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 2318336 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:09.246045+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 2318336 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:10.246213+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 2310144 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:11.246305+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 2310144 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:12.246454+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 2310144 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:13.246583+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 2301952 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:14.246742+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83591168 unmapped: 2301952 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:15.246946+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 2285568 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:16.247140+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 2285568 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:17.247342+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 2285568 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:18.247566+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 2277376 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:19.247740+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 2277376 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:20.247956+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 2269184 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:21.248091+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 2269184 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:22.248211+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 2269184 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:23.248358+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 2260992 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:24.248516+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 2260992 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:25.248694+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 2252800 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:26.248835+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 2252800 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:27.248961+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83640320 unmapped: 2252800 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:28.249126+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:29.249265+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:30.249391+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:31.249523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:32.249656+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:33.249779+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:34.249900+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:35.250041+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 2220032 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:36.250173+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 2220032 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:37.250299+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 2220032 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:38.250434+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 2211840 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:39.250576+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 2211840 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:40.250740+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 2203648 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:41.250890+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 2203648 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:42.251031+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 2203648 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:43.251144+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 2195456 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:44.251296+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 2195456 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:45.251456+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:46.251630+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:47.251766+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:48.251891+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:49.252039+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:50.252183+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:51.252332+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:52.252440+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:53.252585+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:54.252742+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:55.252923+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:56.253074+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 conmon[262514]: conmon 3be3ced4a2a00120c0ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope/container/memory.events
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:57.253233+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:58.253392+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:59.253557+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:00.253689+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:01.253864+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:02.254021+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:03.254185+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:04.254833+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:05.255003+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:06.255154+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:07.255324+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:08.255455+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:09.255610+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:10.255790+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:11.255918+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:12.256068+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:13.256223+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:14.256369+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:15.256549+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:16.256702+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:17.256864+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:18.256996+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:19.257118+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:20.257289+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:21.257435+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:22.257569+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:23.257711+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:24.257827+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:25.257975+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:26.258128+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:27.258344+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:28.258531+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 systemd[1]: libpod-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope: Deactivated successfully.
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:29.258661+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:30.258815+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:31.259220+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:32.263033+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:33.263186+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:34.263411+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:35.263656+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:36.263829+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:37.264091+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 agitated_sammet[262514]: 167 167
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:38.264219+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:39.264457+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:40.264668+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:41.264830+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:42.265015+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:43.265403+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:44.265712+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:45.265910+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:46.266055+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:47.266215+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:48.266346+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:49.266474+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:50.266623+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:51.266807+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:52.267026+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:53.267228+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:54.267355+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:55.267514+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:56.267701+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:57.267932+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:58.268113+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:59.268281+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:00.268464+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:01.268617+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:02.268865+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:03.268989+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:04.269187+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:05.269367+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:06.269523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:07.269638+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:08.269788+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:09.269924+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:10.270051+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:11.270188+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:12.270340+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:13.271112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:14.271316+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:15.271509+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:16.271670+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:17.271921+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:18.272041+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:19.272185+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:20.272391+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:21.272547+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:22.272827+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:23.273034+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:24.273226+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:25.273531+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:26.273709+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:27.273969+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:28.274196+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:29.274339+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:30.274512+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:31.274637+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:32.274783+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:33.274973+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:34.275084+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:35.275271+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:36.275402+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:37.275550+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:38.275662+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:39.275779+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:40.275896+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:41.276214+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:42.277379+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:43.278120+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:44.278288+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:45.278767+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:46.278948+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:47.279057+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:48.279212+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:49.279305+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:50.279462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:51.279650+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:52.279754+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:53.279886+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:54.280750+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:55.280962+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:56.281125+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:57.281262+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:58.281431+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:59.281606+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:00.281763+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:01.281876+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:02.282011+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:03.282151+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:04.282546+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:05.282717+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:06.282843+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:07.283073+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:08.283289+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:09.283487+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:10.283732+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:11.283983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:12.284245+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:13.284475+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:14.284612+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:15.284842+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:16.284964+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:17.285146+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:18.285390+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:19.285588+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:20.285790+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:21.285956+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:22.286082+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:23.286210+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:24.286458+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:25.286690+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:26.286896+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:27.287112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:28.287300+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:29.287534+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:30.287674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:31.287937+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:32.288120+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:33.288266+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:34.288449+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:35.288640+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:36.288792+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:37.288934+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:38.289134+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:39.289315+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:40.289488+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:41.289635+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:42.289771+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:43.289918+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:44.290060+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:45.290314+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:46.290490+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:47.290635+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:48.290778+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:49.290961+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:50.291165+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:51.291320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:52.291493+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:53.291674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:54.291821+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:55.291990+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:56.292130+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:57.292332+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:58.292462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:59.292635+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:00.292762+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:01.292905+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 ms_handle_reset con 0x55d782c66400 session 0x55d782980e00
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d782c66c00
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 ms_handle_reset con 0x55d782c67000 session 0x55d782f1a380
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d782c66400
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:02.293075+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:03.293285+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:04.293438+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:05.293604+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:06.293844+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:07.293997+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:08.294117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:09.294327+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:10.294437+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:11.294594+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:12.294723+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:13.294843+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:14.295017+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:15.295215+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:16.295316+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83714048 unmapped: 2179072 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:17.295461+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:18.295606+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:19.295767+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:20.295893+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:21.296035+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:22.296276+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:23.296501+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:24.296679+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:25.296844+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:26.297008+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:27.297207+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:28.297326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:29.297544+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:30.297752+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:31.297948+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:32.298096+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:33.298244+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:34.298396+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:35.298602+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:36.298801+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:37.298934+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:38.299090+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:39.299326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:40.299467+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:41.299610+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:42.299763+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:43.299930+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:44.300100+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:45.300272+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:46.300479+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:47.300629+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:48.300783+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:49.300931+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:50.301135+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:51.301312+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:52.301727+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:53.301887+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:54.302070+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:55.302235+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:56.302422+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:57.302579+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:58.302700+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 2187264 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:59.302828+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.297973633s of 300.506713867s, submitted: 90
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:00.302975+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:01.303176+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:02.303428+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:03.303615+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:04.303752+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:05.303931+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:06.304319+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:07.304471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:08.304610+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:09.304802+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:10.305003+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:11.305240+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:12.305407+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:13.305570+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:14.305710+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:15.305862+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:16.306009+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:17.306079+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:18.306193+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:19.306401+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:20.306533+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:21.306684+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:22.306868+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:23.307010+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 2170880 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:24.307172+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:25.307456+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:26.307684+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:27.307819+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:28.307983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:29.308127+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:30.308336+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:31.308459+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:32.308674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:33.308861+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:34.309016+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:35.309223+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:36.309318+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:37.309456+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:38.309612+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:39.309757+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:40.309950+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:41.310125+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:42.310305+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:43.310461+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:44.310582+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:45.310822+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:46.310988+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:47.311204+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:48.311319+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:49.311483+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:50.311813+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:51.312208+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:52.312383+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:53.312576+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:54.312828+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:55.313028+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:56.313248+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:57.313440+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:58.313590+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:59.313722+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 2162688 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:00.313894+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:01.314095+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:02.314293+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:03.314442+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:04.314577+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:05.314797+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:06.315007+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 podman[262473]: 2026-01-31 08:51:37.695641929 +0000 UTC m=+0.624445080 container attach 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:51:37 compute-0 podman[262473]: 2026-01-31 08:51:37.696009889 +0000 UTC m=+0.624813010 container died 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:07.315202+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:08.315370+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:09.315531+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:10.315716+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:11.315918+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:12.316096+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:13.316410+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:14.316628+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:15.316823+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:16.317102+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:17.317325+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:18.317545+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:19.317829+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:20.318138+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:21.318436+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:22.318684+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:23.318838+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 2154496 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:24.318992+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:25.319209+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:26.319488+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:27.319677+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:28.319842+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:29.320026+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:30.320231+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:31.320447+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:32.320592+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:33.320841+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:34.321108+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:35.321432+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:36.321697+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:37.321959+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:38.322122+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:39.322312+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:40.322500+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:41.322680+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 2146304 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:42.322834+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:43.322996+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:44.323170+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:45.323358+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:46.323567+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:47.323693+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:48.323831+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:49.323999+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:50.324164+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:51.324345+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:52.324560+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:53.324812+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:54.324981+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:55.325181+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:56.325381+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:57.325611+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:58.325792+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:59.325948+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:00.326102+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:01.326236+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:02.326417+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:03.326609+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 2138112 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:04.326777+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:05.326986+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:06.327139+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:07.327340+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:08.327542+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:09.327708+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:10.327825+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:11.327983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:12.328133+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:13.328467+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:14.328748+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:15.329126+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:16.329330+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:17.329562+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:18.329781+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:19.329929+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:20.330137+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:21.330380+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:22.330654+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:23.330894+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:24.331130+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:25.331435+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:26.331708+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:27.331972+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 2129920 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:28.332220+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:29.332773+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:30.333014+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:31.333234+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:32.333516+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:33.333756+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:34.334000+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:35.334210+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:36.334417+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:37.334650+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:38.334854+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:39.335447+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:40.335627+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:41.335874+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:42.336037+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:43.336240+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:44.336490+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:45.336809+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:46.337026+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:47.337181+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:48.337374+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:49.337529+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:50.337711+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:51.337983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:52.338204+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:53.338475+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:54.338713+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:55.338901+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:56.339126+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:57.339362+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:58.339548+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 2121728 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:59.339703+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:00.340041+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:01.341423+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:02.341599+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:03.341731+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:04.341851+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:05.342052+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:06.342195+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:07.342385+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:08.342542+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:09.342701+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:10.342876+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:11.343033+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:12.343166+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:13.343326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:14.343467+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:15.343691+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:16.343871+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:17.343994+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:18.344106+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:19.344302+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:20.344481+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:21.344627+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:22.344825+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 2113536 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:23.344983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:24.345152+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:25.345429+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:26.345576+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:27.345778+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:28.345954+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:29.346218+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:30.346528+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:31.346708+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:32.346852+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:33.347101+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:34.347275+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:35.347492+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:36.347625+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:37.347752+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:38.347953+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:39.348134+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:40.348426+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:41.349814+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:42.350023+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:43.351117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:44.352059+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:45.353656+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:46.354119+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:47.354588+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:48.355338+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:49.356056+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 2105344 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:50.356765+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Cumulative writes: 7056 writes, 29K keys, 7056 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7056 writes, 1347 syncs, 5.24 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.065       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.032       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.3 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55d7805d98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 2072576 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:51.357396+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:52.357726+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:53.357995+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:54.358361+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:55.358694+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:56.359399+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:57.360209+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:58.360613+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:59.361028+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:00.361436+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:01.361951+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:02.362158+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:03.362376+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:04.362710+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:05.363155+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:06.363481+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:07.363789+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:08.364042+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:09.364351+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:10.364600+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:11.364766+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:12.365036+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:13.365408+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:14.365652+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:15.365885+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:16.366124+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:17.366288+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:18.366543+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:19.366680+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:20.366822+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:21.367016+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:22.367180+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:23.367346+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:24.367530+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:25.367741+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:26.367863+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:27.368014+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 2064384 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:28.368194+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:29.368416+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:30.368650+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:31.368901+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:32.369068+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:33.369234+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:34.369471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:35.369705+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:36.369865+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:37.370001+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:38.370230+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:39.370606+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:40.370737+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:41.370923+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:42.371112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:43.371336+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:44.371512+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:45.371778+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:46.371934+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:47.372065+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:48.372209+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:49.372389+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:50.372522+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:51.372685+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:52.373092+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:53.373320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:54.373539+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:55.373802+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:56.373998+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:57.374236+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:58.374386+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:59.374600+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.966552734s of 300.005035400s, submitted: 22
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:00.374772+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:01.374955+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:02.375111+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 2056192 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:03.375384+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:04.375751+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:05.375914+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:06.376175+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 2244608 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:07.376348+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:08.376562+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:09.376807+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:10.377061+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:11.377307+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:12.377496+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:13.377731+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:14.377931+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:15.378168+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:16.378312+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:17.378983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:18.379771+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:19.380320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:20.380578+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:21.381320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:22.381749+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:23.382234+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:24.382740+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:25.383067+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:26.383215+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:27.383520+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:28.383642+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:29.383788+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:30.384146+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:31.384338+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:32.384475+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:33.384748+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:34.384977+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:35.385114+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:36.385349+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:37.385502+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:38.385693+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:39.385887+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:40.386020+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:41.386233+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:42.386496+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:43.386750+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:44.387034+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:45.387300+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:46.387516+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:47.387794+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:48.388039+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:49.388195+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:50.388441+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:51.388759+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:52.388889+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:53.389081+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:54.389347+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:55.389572+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:56.389762+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1012546 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 heartbeat osd_stat(store_statfs(0x4fce35000/0x0/0x4ffc00000, data 0x13186c/0x1f7000, compress 0x0/0x0/0x0, omap 0x143cc, meta 0x2bbbc34), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:57.389926+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 2236416 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784a97400
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:58.390180+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 56.129043579s of 59.412487030s, submitted: 90
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 128 heartbeat osd_stat(store_statfs(0x4fce30000/0x0/0x4ffc00000, data 0x133408/0x1fa000, compress 0x0/0x0/0x0, omap 0x145f0, meta 0x2bbba10), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:59.390481+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83664896 unmapped: 2228224 heap: 85893120 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 129 heartbeat osd_stat(store_statfs(0x4fce2f000/0x0/0x4ffc00000, data 0x133410/0x1fb000, compress 0x0/0x0/0x0, omap 0x145f0, meta 0x2bbba10), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:00.390626+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 92078080 unmapped: 10600448 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:01.391043+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1063859 data_alloc: 218103808 data_used: 6999
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 18980864 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:02.391315+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 19111936 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 130 ms_handle_reset con 0x55d784a97400 session 0x55d784e8ba40
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:03.391606+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784a97800
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 18980864 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:04.391889+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 92127232 unmapped: 10551296 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:05.392144+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 18931712 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 130 heartbeat osd_stat(store_statfs(0x4fb1b8000/0x0/0x4ffc00000, data 0x1da6bd3/0x1e72000, compress 0x0/0x0/0x0, omap 0x150e3, meta 0x2bbaf1d), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:06.392360+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176756 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 18931712 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:07.392571+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 131 ms_handle_reset con 0x55d784a97800 session 0x55d784a401c0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:08.392854+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 131 heartbeat osd_stat(store_statfs(0x4fb1b4000/0x0/0x4ffc00000, data 0x1da87ae/0x1e76000, compress 0x0/0x0/0x0, omap 0x154ad, meta 0x2bbab53), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14696 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:37 compute-0 ceph-82c880e6-d992-5408-8b12-efff9c275473-mgr-compute-0-fqetdi[75515]: 2026-01-31T08:51:37.723+0000 7fcf0ed23640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:51:37 compute-0 ceph-mgr[75519]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:09.393187+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:10.393380+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:11.393580+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180902 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.004181862s of 13.245240211s, submitted: 48
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:12.393845+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:13.394119+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fb1b1000/0x0/0x4ffc00000, data 0x1daa22d/0x1e79000, compress 0x0/0x0/0x0, omap 0x15720, meta 0x2bba8e0), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:14.394430+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 18939904 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:15.394635+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784a97c00
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:16.394762+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1183036 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fb1ae000/0x0/0x4ffc00000, data 0x1dabe1d/0x1e7c000, compress 0x0/0x0/0x0, omap 0x15bc3, meta 0x2bba43d), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:17.394988+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 133 ms_handle_reset con 0x55d784a97c00 session 0x55d782c948c0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:18.395196+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:19.395427+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 18808832 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:20.395657+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d783df2000
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 18669568 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:21.395846+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1121976 data_alloc: 218103808 data_used: 7034
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fbe21000/0x0/0x4ffc00000, data 0x113bdfa/0x120b000, compress 0x0/0x0/0x0, omap 0x15fd9, meta 0x2bba027), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 18653184 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fbe21000/0x0/0x4ffc00000, data 0x113bdfa/0x120b000, compress 0x0/0x0/0x0, omap 0x15fd9, meta 0x2bba027), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:22.396112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.201231003s of 10.794104576s, submitted: 57
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 133 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 18653184 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:23.396391+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 18636800 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:24.396620+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 18636800 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:25.396887+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fbe1f000/0x0/0x4ffc00000, data 0x113d9da/0x120d000, compress 0x0/0x0/0x0, omap 0x162e4, meta 0x2bb9d1c), peers [0,2] op hist [0,0,1])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 135 ms_handle_reset con 0x55d783df2000 session 0x55d78517b880
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 18767872 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:26.397138+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046147 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:27.397354+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:28.397606+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:29.397931+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 135 heartbeat osd_stat(store_statfs(0x4fce1a000/0x0/0x4ffc00000, data 0x13f475/0x210000, compress 0x0/0x0/0x0, omap 0x16729, meta 0x2bb98d7), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:30.398151+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:31.398329+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:32.398508+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:33.398698+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:34.398901+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:35.399086+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:36.399220+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:37.399345+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:38.399504+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 18915328 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:39.399641+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 18898944 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:40.399813+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:41.399954+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:42.400117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:43.407739+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:44.407892+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:45.408145+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:46.408400+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:47.408573+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:48.408760+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:49.408938+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:50.409117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:51.409340+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:52.409544+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:53.409696+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:54.409919+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:55.410168+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:56.410339+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:57.410545+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:58.410851+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:59.411134+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:00.411373+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:01.411667+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:02.411825+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:03.412081+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:04.412269+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:05.412600+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:06.412876+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:07.413093+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:08.413363+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:09.413621+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:10.413825+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:11.414157+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:12.414407+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:13.414611+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:14.414817+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:15.415060+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:16.415234+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:17.415544+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:18.415774+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:19.416043+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:20.416320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:21.416452+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:22.416603+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:23.416778+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:24.416987+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:25.417192+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:26.417441+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:27.417596+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:28.417767+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:29.417934+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:30.418069+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:31.418294+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:32.418447+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:33.418564+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:34.418753+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:35.418963+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:36.419123+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:37.419239+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:38.419492+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:39.419720+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:40.419897+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:41.420105+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:42.420355+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:43.420584+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:44.420770+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:45.421000+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:46.421183+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:47.421323+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:48.421475+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:49.421682+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:50.421825+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:51.422031+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:52.422204+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:53.422334+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:54.422517+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:55.422800+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:56.422992+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:57.423143+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:58.423497+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:59.423709+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:00.423851+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:01.424063+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:02.424218+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:03.424423+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread fragmentation_score=0.000128 took=0.000027s
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:04.424625+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:05.424906+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:06.425037+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:07.425194+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:08.425360+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:09.425532+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:10.425730+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:11.425901+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:12.426039+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:13.426160+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:14.426326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:15.426565+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:16.426882+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:17.427099+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:18.427286+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:19.427633+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:20.427752+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:21.427889+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:22.428142+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:23.428321+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:24.428493+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:25.428779+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:26.428951+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:27.429103+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:28.429295+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:29.429449+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:30.429628+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:31.429792+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:32.429995+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:33.430155+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:34.430331+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:35.430562+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:36.430700+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:37.430857+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:38.430996+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:39.431231+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:40.431510+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:41.431740+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:42.431903+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:43.432026+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:44.432155+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:45.432386+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:46.432545+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:47.432751+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:48.432910+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:49.433073+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:50.433313+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:51.433502+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:52.433714+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:53.433845+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:54.433971+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:55.434155+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:56.434340+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:57.434517+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:58.434674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:59.434826+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:00.434998+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:01.435117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:02.435462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:03.435629+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:04.435777+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:05.435906+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:06.436050+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:07.436220+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:08.436353+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:09.436485+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:10.436617+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:11.436772+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:12.436893+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:13.437047+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:14.437173+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:15.437326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:16.437430+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:17.437536+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:18.437691+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:19.437857+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:20.438103+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:21.438359+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:22.438526+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:23.438651+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:24.438816+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:25.439020+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:26.439202+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:27.439363+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:28.439509+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:29.439925+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:30.440163+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:31.440360+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:32.440546+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:33.440684+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:34.440818+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:35.441009+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:36.441183+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:37.441346+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:38.441453+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:39.441550+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:40.441731+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:41.441973+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:42.442130+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:43.442312+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:44.442513+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:45.442779+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:46.443112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:47.446959+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:48.447194+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:49.447572+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:50.447903+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:51.448116+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:52.448401+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:53.448675+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:54.448930+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:55.449202+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:56.449430+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:57.449701+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:58.450007+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:59.450164+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:00.450372+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:01.450579+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:02.450827+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:03.451070+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:04.451205+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:05.451426+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:06.451562+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:07.451708+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:08.451849+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:09.452055+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:10.452245+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:11.452502+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:12.452675+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:13.452835+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:14.452988+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:15.453133+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:16.453236+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:17.453399+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:18.453571+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:19.453736+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:20.453884+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:21.454061+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:22.454226+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:23.454421+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:24.454578+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:25.454710+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:26.454869+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:27.455016+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:28.455193+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:29.455336+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:30.455442+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:31.455607+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:32.455774+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:33.455905+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:34.456079+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:35.456325+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:36.456527+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:37.456686+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:38.457523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:39.458514+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:40.459319+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:41.459828+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:42.460155+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:43.461025+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:44.461847+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:45.462391+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:46.462665+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:47.462854+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:48.463032+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:49.463443+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:50.463673+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:51.464089+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:52.464483+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:53.464835+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:54.465218+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:55.465526+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:56.465845+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:57.466153+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:58.466325+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:59.466542+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:00.466840+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:01.467172+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:02.467381+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:03.467534+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:04.467840+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:05.468058+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:06.468306+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:07.468533+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:08.468813+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:09.469025+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:10.469187+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:11.469289+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:12.469534+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:13.469712+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:14.469899+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:15.470291+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:16.470487+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:17.470614+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:18.470756+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:19.470839+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:20.471093+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:21.471284+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:22.471463+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:23.471601+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:24.471729+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:25.471891+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:26.472006+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:27.472151+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:28.472338+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:29.472507+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:30.472650+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:31.472784+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:32.472898+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:33.473055+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:34.473197+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:35.473394+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:36.473542+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:37.473698+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:38.473863+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:39.473999+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:40.474145+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:41.474307+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:42.474472+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:43.474919+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:44.475059+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:45.475217+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:46.475528+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:47.475670+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:48.476201+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:49.476380+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:50.476644+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:51.476857+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:52.477173+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:53.477390+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:54.477546+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:55.477796+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:56.478226+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:57.478562+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:58.478734+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:59.478917+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:00.479151+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:01.479300+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:02.479627+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:03.479795+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:04.479960+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:05.480129+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:06.480427+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:07.480592+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:08.480841+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:09.481006+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:10.481186+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:11.481321+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:12.481468+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:13.481633+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:14.481801+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:15.481972+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:16.482069+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:17.482200+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:18.482388+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:19.482575+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:20.484744+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:21.486112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:22.486224+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:23.486363+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:24.486528+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:25.486692+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:26.486825+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:27.486941+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:28.487114+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:29.487336+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:30.487452+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:31.487629+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:32.487804+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:33.488028+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:34.488184+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:35.488746+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:36.488906+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:37.489115+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:38.489273+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:39.489552+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:40.489803+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:41.489980+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:42.490132+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:43.490344+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:44.490579+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:45.490752+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:46.490933+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:47.491104+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:48.491345+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:49.491493+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:50.491794+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:51.492127+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:52.492415+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:53.492576+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:54.492733+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:55.492979+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:56.493284+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:57.493559+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:58.493786+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:59.493984+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:00.494312+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:01.494495+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:02.494709+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:03.494890+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:04.495165+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:05.495554+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:06.495759+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:07.495919+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:08.496053+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:09.496324+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:10.496522+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:11.496690+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:12.496916+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:13.497117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:14.497287+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:15.497583+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:16.497771+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:17.497999+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:18.498215+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:19.498371+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:20.498563+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:21.498698+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:22.498836+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:23.498980+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:24.499140+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:25.499516+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:26.499732+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:27.499966+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:28.500188+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:29.500342+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:30.500482+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:31.500632+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 heartbeat osd_stat(store_statfs(0x4fce17000/0x0/0x4ffc00000, data 0x140ef4/0x213000, compress 0x0/0x0/0x0, omap 0x16a75, meta 0x2bb958b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:32.500785+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048921 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:33.500928+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784a97800
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 18866176 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:34.501146+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 429.815399170s of 431.936645508s, submitted: 61
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 17825792 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:35.501326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 137 ms_handle_reset con 0x55d784a97800 session 0x55d78461ba40
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 16777216 heap: 102678528 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d781e85400
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:36.501462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 25108480 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 137 heartbeat osd_stat(store_statfs(0x4fc614000/0x0/0x4ffc00000, data 0x942a90/0xa16000, compress 0x0/0x0/0x0, omap 0x17227, meta 0x2bb8dd9), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:37.501601+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 138 ms_handle_reset con 0x55d781e85400 session 0x55d784246700
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:38.501757+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100919 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:39.501842+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:40.501984+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:41.502118+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc610000/0x0/0x4ffc00000, data 0x94464f/0xa1a000, compress 0x0/0x0/0x0, omap 0x17606, meta 0x2bb89fa), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:42.502344+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:43.502502+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100919 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:44.502858+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:45.503057+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 138 heartbeat osd_stat(store_statfs(0x4fc610000/0x0/0x4ffc00000, data 0x94464f/0xa1a000, compress 0x0/0x0/0x0, omap 0x17606, meta 0x2bb89fa), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:46.503231+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:47.503484+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:48.503704+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1100919 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 25100288 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:49.504320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d781e85000
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.652050018s of 14.759039879s, submitted: 27
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 139 ms_handle_reset con 0x55d781e85000 session 0x55d783d97180
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 24051712 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:50.504476+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.3 total, 600.0 interval
                                           Cumulative writes: 7626 writes, 30K keys, 7626 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7626 writes, 1597 syncs, 4.78 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 570 writes, 1601 keys, 570 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s
                                           Interval WAL: 570 writes, 250 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 24051712 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:51.504641+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 139 heartbeat osd_stat(store_statfs(0x4fc60d000/0x0/0x4ffc00000, data 0x94623f/0xa1d000, compress 0x0/0x0/0x0, omap 0x17ca5, meta 0x2bb835b), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87023616 unmapped: 24051712 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784ba7400
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:52.504761+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 139 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 140 ms_handle_reset con 0x55d784ba7400 session 0x55d784c10c40
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 23945216 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:53.504912+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064249 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 23945216 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:54.505130+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87130112 unmapped: 23945216 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:55.505328+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:56.505569+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:57.505796+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:58.505974+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1064249 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87031808 unmapped: 24043520 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:59.506243+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: mgrc ms_handle_reset ms_handle_reset con 0x55d782260000
Jan 31 08:51:37 compute-0 ceph-osd[87035]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 08:51:37 compute-0 ceph-osd[87035]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: get_auth_request con 0x55d78492c000 auth_method 0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:00.506529+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 140 heartbeat osd_stat(store_statfs(0x4fce0b000/0x0/0x4ffc00000, data 0x147e0c/0x21f000, compress 0x0/0x0/0x0, omap 0x182cd, meta 0x2bb7d33), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.615085602s of 11.834397316s, submitted: 74
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:01.506673+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d7806c5000 session 0x55d782f1a540
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d781e85000
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d782c66c00 session 0x55d784675dc0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784a97000
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d782c66400 session 0x55d784a90a80
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d782c66c00
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:02.506820+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:03.507099+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:04.507235+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:05.507433+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:06.507671+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:07.507883+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:08.508056+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:09.508186+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:10.508341+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:11.508539+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:12.508652+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:13.508812+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:14.508963+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:15.509207+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:16.509411+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:17.509557+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:18.509735+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:19.509929+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:20.510108+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:21.510378+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:22.510535+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:23.511408+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:24.511599+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:25.511913+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:26.512237+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:27.512855+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:28.513195+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:29.513679+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:30.513990+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:31.514209+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:32.514435+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:33.514676+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:34.514929+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:35.515370+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:36.515670+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:37.515803+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:38.515988+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:39.516138+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:40.516405+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:41.516618+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:42.516759+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:43.516946+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:44.517104+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:45.517394+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:46.517566+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:47.517722+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:48.517861+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:49.518007+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:50.518180+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 ms_handle_reset con 0x55d78283fc00 session 0x55d782c95c00
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784a97800
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:51.518393+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:52.518566+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:53.518725+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:54.518916+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:55.519130+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:56.519321+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:57.520230+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:58.520356+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1067023 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:59.521027+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.195083618s of 58.221981049s, submitted: 15
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce08000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:00.521275+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87408640 unmapped: 23666688 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:01.521612+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:02.522823+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:03.524158+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:04.525343+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:05.525810+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:06.526066+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:07.526507+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:08.526922+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:09.527169+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:10.527277+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:11.527535+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:12.527721+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:13.527948+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:14.528180+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:15.528374+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:16.528538+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:17.528685+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:18.528877+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:19.529063+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:20.529299+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:21.529521+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:22.529674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:23.529863+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:24.530026+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:25.530237+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:26.530471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:27.530626+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:28.530799+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:29.530961+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:30.531110+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:31.531243+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:32.531430+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:33.531566+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:34.531705+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:35.531888+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:36.532113+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:37.532290+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:38.532394+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:39.532523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:40.532651+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:41.532780+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:42.532940+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:43.533025+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:44.533134+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:45.533329+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:46.533494+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:47.533632+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:48.533765+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:49.533935+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:50.534148+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:51.534418+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:52.534610+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:53.534743+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:54.534900+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:55.535054+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:56.535172+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:57.535416+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:58.535556+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:59.535750+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:00.535991+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:01.536703+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:02.537869+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:03.538065+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:04.538314+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:05.538535+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:06.538841+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:07.539078+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:08.539297+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:09.539453+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:10.540168+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:11.540721+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:12.541062+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:13.541542+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:14.541926+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:15.542383+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:16.542773+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:17.543017+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:18.543197+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:19.543430+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:20.543612+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:21.554070+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:22.554349+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:23.555927+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:24.556170+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:25.556461+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:26.556654+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:27.556864+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:28.557064+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:29.557283+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:30.557476+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:31.557691+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:32.557883+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:33.558024+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:34.558160+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:35.558504+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:36.558660+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:37.558885+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:38.559100+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:39.559325+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:40.559530+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:41.559736+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:42.559915+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:43.560174+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:44.560394+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:45.560636+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:46.560834+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:47.561101+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:48.561360+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:49.561619+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:50.561802+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:51.561989+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:52.562130+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:53.562328+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:54.562520+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:55.562798+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:56.562965+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:57.563156+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:58.563303+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:59.563443+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:00.563598+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:01.563763+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:02.563925+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:03.564061+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:04.564203+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:05.564375+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:06.564579+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:07.564731+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:08.564964+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:09.565175+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:10.565371+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:11.565586+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:12.565805+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:13.566006+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:14.566281+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:15.566572+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:16.566822+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:17.567081+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:18.567389+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:19.567617+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:20.567777+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:21.567927+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:22.568167+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:23.568320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:24.568517+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:25.568751+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:26.568910+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:27.569172+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:28.569334+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:29.569567+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:30.569773+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:31.569963+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:32.570152+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:33.570328+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:34.570540+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:35.570769+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:36.570907+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:37.571079+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:38.571223+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:39.571349+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:40.571498+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:41.571644+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:42.571782+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:43.571938+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:44.572061+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:45.572240+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:46.572400+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:47.572531+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:48.572674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:49.572827+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:50.572986+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:51.573141+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:52.573239+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:53.573398+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:54.573539+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:55.573699+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:56.573858+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:57.574013+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:58.574139+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:59.574245+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:00.574389+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:01.574564+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:02.574709+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:03.574885+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:04.575029+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:05.575196+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:06.575362+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:07.575792+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:08.575992+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:09.576134+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:10.576302+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:11.576462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:12.576610+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:13.576720+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:14.576841+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:15.577015+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:16.577245+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:17.577445+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:18.577621+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:19.577756+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:20.577876+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:21.578018+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:22.578194+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:23.578817+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:24.578979+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:25.579193+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:26.579365+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:27.579547+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:28.579667+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:29.579836+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:30.579987+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:31.580177+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:32.580388+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:33.580532+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:34.580671+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:35.580836+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:36.580965+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:37.581111+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:38.581272+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:39.581471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:40.581650+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:41.581807+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:42.582019+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:43.582204+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:44.582354+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:45.582545+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:46.582781+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:47.582951+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:48.583098+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:49.583330+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:50.583472+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:51.583609+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:52.583730+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:53.583886+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:54.584053+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:55.589960+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:56.590135+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:57.590321+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:58.590485+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:59.590674+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:00.590835+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:01.590978+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87416832 unmapped: 23658496 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:02.591143+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:03.591350+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:04.591467+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:05.591635+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:06.591777+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:07.591918+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:08.592060+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:09.592207+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:10.592476+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:11.592642+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:12.592790+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:13.592975+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:14.593492+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:15.593686+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:16.594042+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:17.594271+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:18.594487+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:19.594663+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:20.594789+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:21.594939+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:22.595080+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:23.595215+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:24.595506+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:25.595689+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:26.595930+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:27.596127+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:28.596318+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:29.596720+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:30.597080+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:31.597508+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:32.597694+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:33.597892+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:34.598112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:35.598374+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:36.598581+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:37.598740+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:38.598917+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:39.599049+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:40.599178+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:41.599420+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:42.599635+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:43.599826+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:44.600022+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:45.600349+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:46.600546+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:47.600791+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:48.604551+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:49.604881+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:50.605166+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:51.605657+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.605778+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:53.606424+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:54.606593+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:55.606895+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.607006+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:57.607381+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:58.607578+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:59.607744+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.607886+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:01.608052+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:02.608209+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:03.608403+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:04.608538+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:05.608757+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:06.608928+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:07.609122+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:08.609323+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:09.609486+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:10.609646+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:11.609908+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:12.610072+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:13.610295+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:14.610509+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:15.610696+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:16.610827+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:17.611005+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:18.611154+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:19.611311+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:20.611462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:21.611624+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:22.611761+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:23.611968+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:24.612102+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:25.612347+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:26.612541+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:27.612700+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:28.612862+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:29.613018+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:30.613149+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:31.613317+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:32.613451+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:33.613598+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:34.613761+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:35.614018+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:36.614150+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:37.614347+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:38.614491+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:39.614637+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:40.614814+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:41.614952+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:42.615105+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:43.615327+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:44.615472+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:45.615771+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:46.615951+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:47.616088+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:48.616223+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.616387+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.616523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87425024 unmapped: 23650304 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.616693+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.616820+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.616963+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.617119+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.617347+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.617532+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.617746+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.617984+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.618154+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.618536+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.618753+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.618967+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.619098+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.619529+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.619684+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.619846+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.620070+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.630398+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.630653+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.630810+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.630968+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.631112+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.631295+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.631435+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.637738+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.637938+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.638132+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87433216 unmapped: 23642112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.638320+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.638525+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.638702+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.638852+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.639036+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.639193+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.639361+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.639542+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.639699+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.639864+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.640077+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.640206+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.640338+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.640523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.640665+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.640853+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.640987+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.641119+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.641270+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.641383+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.641521+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.641655+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.641824+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.641994+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.642150+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.642311+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.642469+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.642689+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.642894+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.643101+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87441408 unmapped: 23633920 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.643326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.643498+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.643647+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.643782+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.643915+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.644074+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.644221+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.644430+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.644589+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.644822+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.645022+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.645155+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.645377+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.645572+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.645745+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.645898+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.646077+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.646319+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.646536+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.646709+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.646854+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.646993+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.647174+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.647374+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.647533+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.647692+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.647853+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.648076+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.648223+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.648347+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.648502+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 23625728 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.648707+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.648897+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.649072+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.649277+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.649501+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.649640+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.649803+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.649998+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.650128+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.650332+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.650475+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.650639+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.650804+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.650996+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.651234+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.651445+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.651779+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.651975+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.652171+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.652316+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.652470+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.652578+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.652762+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.652904+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.653075+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.653335+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.653605+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.653780+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.653926+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.654111+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.654378+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.654545+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.654757+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.655001+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.655203+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.655372+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.655616+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.655843+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.656097+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.656293+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.656510+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.656662+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.656868+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87465984 unmapped: 23609344 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.656991+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.657243+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.657401+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.657557+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.657790+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6a58454dbaf4c1ee1a4a9d7221c38c65784fdf82bb0613276f3fe012f38120c-merged.mount: Deactivated successfully.
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.658017+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.658306+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.658563+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.658745+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.658950+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.659301+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87474176 unmapped: 23601152 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.659508+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.659734+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.659980+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.660193+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.660415+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.660611+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.660815+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.660983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.661163+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.661299+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.661466+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.661647+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.661871+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.662073+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.662239+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.662376+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.662581+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.662757+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.662920+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.665464+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.668045+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.668442+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.669409+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.669628+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.669777+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.670739+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.671054+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87482368 unmapped: 23592960 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.671363+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.672104+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.672747+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.673347+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.673523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.673712+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.673835+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.673975+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.674238+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.674631+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.674882+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.3 total, 600.0 interval
                                           Cumulative writes: 7939 writes, 31K keys, 7939 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 7939 writes, 1746 syncs, 4.55 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 313 writes, 601 keys, 313 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s
                                           Interval WAL: 313 writes, 149 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.675141+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.675352+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.675608+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.675818+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.675997+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.676207+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.676450+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.676607+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.676821+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.677021+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.677302+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.677441+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.677563+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.677712+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.677958+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.678084+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.678351+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.678491+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87490560 unmapped: 23584768 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.678641+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.678729+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.678888+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.679030+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.679183+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.679395+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.679671+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.679850+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.680001+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.680149+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.680341+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.680459+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.680563+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.680735+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.680906+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.681093+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.681283+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.681425+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.681589+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.681756+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.681910+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.682819+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.682957+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.683139+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.683356+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.683533+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.683744+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.683901+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.684058+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.684324+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.684525+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.684736+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.684943+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.685178+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87498752 unmapped: 23576576 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.685363+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.685541+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.685734+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.685936+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.686119+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.686471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.686653+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.687145+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.687326+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.687474+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.687630+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.687774+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.687931+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.688103+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.688229+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.688405+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.688552+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 599.761596680s of 600.772583008s, submitted: 112
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.688678+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87506944 unmapped: 23568384 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.688908+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87375872 unmapped: 23699456 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.689033+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.689189+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.689333+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.689472+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.689632+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.689783+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.689961+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.690104+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.690298+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.690414+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.690615+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.690746+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.690874+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.691071+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.691211+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.691360+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.691503+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.691650+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.691912+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.692144+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1066303 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.692336+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 heartbeat osd_stat(store_statfs(0x4fce0a000/0x0/0x4ffc00000, data 0x14988b/0x222000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 87457792 unmapped: 23617536 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d784ba7400
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.437479019s of 23.119520187s, submitted: 90
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.692462+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 96944128 unmapped: 14131200 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.692599+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88563712 unmapped: 22511616 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 141 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc609000/0x0/0x4ffc00000, data 0x9498ae/0xa23000, compress 0x0/0x0/0x0, omap 0x18621, meta 0x2bb79df), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 ms_handle_reset con 0x55d784ba7400 session 0x55d7846ce540
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.692758+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:26.692886+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.693128+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.693329+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.693481+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.693634+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.693786+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:32.693924+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:33.694092+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:34.694224+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:35.694454+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:36.694634+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:37.694801+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:38.694940+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:39.695101+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:40.695341+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:41.695477+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:42.695583+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:43.695696+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:44.695802+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:45.695951+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:46.696097+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:47.696241+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:48.696457+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:49.696590+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:50.696740+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:51.696883+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:52.697045+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:53.697191+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:54.697336+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:55.697598+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:56.697768+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:57.697918+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:58.698119+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:59.698243+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:00.698422+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:01.698583+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:02.698701+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:03.698885+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:04.699010+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:05.699213+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:06.699315+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:07.699470+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:08.699627+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:09.699734+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:10.699909+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:11.700077+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:12.700322+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:13.700500+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:14.700705+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:15.700917+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:16.701163+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:17.701425+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:18.701680+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:19.701874+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:20.702162+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:21.702413+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:22.702558+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:23.702710+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:24.702917+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:25.703081+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:26.703321+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114966 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x94b46d/0xa27000, compress 0x0/0x0/0x0, omap 0x1884e, meta 0x2bb77b2), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:27.703503+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88588288 unmapped: 22487040 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:28.703692+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: handle_auth_request added challenge on 0x55d782d7d400
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 64.962402344s of 65.510986328s, submitted: 10
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 143 ms_handle_reset con 0x55d782d7d400 session 0x55d784a40000
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:29.703831+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:30.703966+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fce01000/0x0/0x4ffc00000, data 0x14d017/0x228000, compress 0x0/0x0/0x0, omap 0x18b1b, meta 0x2bb74e5), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:31.704139+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076054 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:32.704293+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:33.704476+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:34.704646+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fce01000/0x0/0x4ffc00000, data 0x14d017/0x228000, compress 0x0/0x0/0x0, omap 0x18b1b, meta 0x2bb74e5), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:35.857972+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:36.858142+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1076054 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _renew_subs
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fce01000/0x0/0x4ffc00000, data 0x14d017/0x228000, compress 0x0/0x0/0x0, omap 0x18b1b, meta 0x2bb74e5), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:37.858318+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:38.858533+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:39.858725+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:40.858896+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:41.859068+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:42.859189+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:43.859399+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:44.859597+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:45.859816+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:46.860046+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:47.860177+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:48.860349+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.860471+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.860645+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.860859+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.861051+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.861736+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.861892+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.862071+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.862193+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.862337+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.862530+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.862697+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.862892+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.863049+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88662016 unmapped: 22413312 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.863161+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.863302+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.863421+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.863598+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.863794+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.864113+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.864291+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.864521+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.864660+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.864823+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.865028+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.865176+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.865363+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.865563+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.865694+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.865857+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.865997+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.866198+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.866365+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.866569+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.866723+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.866876+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.867040+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.867224+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.867413+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.867573+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.867783+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.867966+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.868114+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.868339+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.868546+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.868730+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.868921+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.869179+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.869448+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.869630+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.869807+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.869983+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.870183+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.870383+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.870588+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.870785+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.871012+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.871342+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.871484+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.871630+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.871790+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.871922+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.872061+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.872201+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.872375+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.872523+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.872704+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.872883+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.872986+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.873117+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.873268+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.873432+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.873575+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.873705+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.873845+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fcdff000/0x0/0x4ffc00000, data 0x14ea96/0x22b000, compress 0x0/0x0/0x0, omap 0x18e74, meta 0x2bb718c), peers [0,2] op hist [])
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88670208 unmapped: 22405120 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:03.873977+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 88801280 unmapped: 22274048 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:04.874139+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}'
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}'
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 89407488 unmapped: 21667840 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:05.874303+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: tick
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_tickets
Jan 31 08:51:37 compute-0 ceph-osd[87035]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:06.874416+0000)
Jan 31 08:51:37 compute-0 ceph-osd[87035]: prioritycache tune_memory target: 4294967296 mapped: 89481216 unmapped: 21594112 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:37 compute-0 ceph-osd[87035]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:37 compute-0 ceph-osd[87035]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1077655 data_alloc: 218103808 data_used: 7018
Jan 31 08:51:37 compute-0 ceph-osd[87035]: do_command 'log dump' '{prefix=log dump}'
Jan 31 08:51:37 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:37 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 31 08:51:37 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811170452' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 08:51:37 compute-0 rsyslogd[1007]: imjournal from <np0005603663:ceph-osd>: begin to drop messages due to rate-limiting
Jan 31 08:51:38 compute-0 podman[262473]: 2026-01-31 08:51:38.03270548 +0000 UTC m=+0.961508571 container remove 3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 08:51:38 compute-0 systemd[1]: libpod-conmon-3be3ced4a2a00120c0ee8ba50c9e19e5abed8fd9db9fff58081846f84e3900ad.scope: Deactivated successfully.
Jan 31 08:51:38 compute-0 podman[262613]: 2026-01-31 08:51:38.178980407 +0000 UTC m=+0.049277234 container create 4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075495847' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 31 08:51:38 compute-0 podman[262613]: 2026-01-31 08:51:38.157142055 +0000 UTC m=+0.027438922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:51:38 compute-0 systemd[1]: Started libpod-conmon-4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586.scope.
Jan 31 08:51:38 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132048f7981605c2ae950b4b1ef2ef5262882703db48d1e421f517ff3f99fe10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132048f7981605c2ae950b4b1ef2ef5262882703db48d1e421f517ff3f99fe10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132048f7981605c2ae950b4b1ef2ef5262882703db48d1e421f517ff3f99fe10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132048f7981605c2ae950b4b1ef2ef5262882703db48d1e421f517ff3f99fe10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132048f7981605c2ae950b4b1ef2ef5262882703db48d1e421f517ff3f99fe10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.339 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:38 compute-0 podman[262613]: 2026-01-31 08:51:38.363853294 +0000 UTC m=+0.234150151 container init 4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kepler, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: from='client.14692 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:38 compute-0 ceph-mon[75227]: from='client.14696 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:38 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3811170452' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 31 08:51:38 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2075495847' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 31 08:51:38 compute-0 podman[262613]: 2026-01-31 08:51:38.370974436 +0000 UTC m=+0.241271263 container start 4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kepler, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 08:51:38 compute-0 podman[262613]: 2026-01-31 08:51:38.39460603 +0000 UTC m=+0.264902857 container attach 4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240149101' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.441 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.441 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.441 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.442 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.442 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369653470' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 31 08:51:38 compute-0 nostalgic_kepler[262634]: --> passed data devices: 0 physical, 3 LVM
Jan 31 08:51:38 compute-0 nostalgic_kepler[262634]: --> All data devices are unavailable
Jan 31 08:51:38 compute-0 systemd[1]: libpod-4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586.scope: Deactivated successfully.
Jan 31 08:51:38 compute-0 podman[262613]: 2026-01-31 08:51:38.818368111 +0000 UTC m=+0.688664938 container died 4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677397529' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 31 08:51:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-132048f7981605c2ae950b4b1ef2ef5262882703db48d1e421f517ff3f99fe10-merged.mount: Deactivated successfully.
Jan 31 08:51:38 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:51:38 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545235842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:51:38 compute-0 nova_compute[238824]: 2026-01-31 08:51:38.981 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:39 compute-0 podman[262613]: 2026-01-31 08:51:39.005950155 +0000 UTC m=+0.876246982 container remove 4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:51:39 compute-0 systemd[1]: libpod-conmon-4b6b2cd6ec95da0dfa3ef3be44b848d1d8638ace93f605fee119043423431586.scope: Deactivated successfully.
Jan 31 08:51:39 compute-0 sudo[262388]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:39 compute-0 sudo[262789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:39 compute-0 sudo[262789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:39 compute-0 sudo[262789]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.125 238828 WARNING nova.virt.libvirt.driver [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.126 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4678MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.126 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.127 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 08:51:39 compute-0 sudo[262814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- lvm list --format json
Jan 31 08:51:39 compute-0 sudo[262814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.213 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.213 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.248 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 08:51:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 31 08:51:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053334413' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 31 08:51:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763206130' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 31 08:51:39 compute-0 podman[262885]: 2026-01-31 08:51:39.431405784 +0000 UTC m=+0.019213988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:51:39 compute-0 ceph-mon[75227]: pgmap v1487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/240149101' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3369653470' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2677397529' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2545235842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1053334413' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3763206130' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 31 08:51:39 compute-0 podman[262885]: 2026-01-31 08:51:39.573969125 +0000 UTC m=+0.161777299 container create 93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 08:51:39 compute-0 systemd[1]: Started libpod-conmon-93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086.scope.
Jan 31 08:51:39 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 31 08:51:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1714001896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 31 08:51:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202479044' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.790 238828 DEBUG oslo_concurrency.processutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.795 238828 DEBUG nova.compute.provider_tree [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed in ProviderTree for provider: 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.824 238828 DEBUG nova.scheduler.client.report [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Inventory has not changed for provider 6d4ff98f-eb37-47a1-bfaf-01e7f5329d98 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.827 238828 DEBUG nova.compute.resource_tracker [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 08:51:39 compute-0 nova_compute[238824]: 2026-01-31 08:51:39.828 238828 DEBUG oslo_concurrency.lockutils [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 08:51:39 compute-0 podman[262885]: 2026-01-31 08:51:39.866357065 +0000 UTC m=+0.454165299 container init 93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 31 08:51:39 compute-0 podman[262885]: 2026-01-31 08:51:39.876426151 +0000 UTC m=+0.464234315 container start 93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 08:51:39 compute-0 focused_leakey[262942]: 167 167
Jan 31 08:51:39 compute-0 systemd[1]: libpod-93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086.scope: Deactivated successfully.
Jan 31 08:51:39 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 31 08:51:39 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617319252' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 31 08:51:39 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:39 compute-0 podman[262885]: 2026-01-31 08:51:39.946488327 +0000 UTC m=+0.534296511 container attach 93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:39 compute-0 podman[262885]: 2026-01-31 08:51:39.947840616 +0000 UTC m=+0.535648800 container died 93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 08:51:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f93d0a101e5262d5a334d2c11bf9eff1c8e01d42e3e37f1f26a7d4db1d2d6ae-merged.mount: Deactivated successfully.
Jan 31 08:51:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 31 08:51:40 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3504867010' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 31 08:51:40 compute-0 podman[262885]: 2026-01-31 08:51:40.397304538 +0000 UTC m=+0.985112712 container remove 93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 08:51:40 compute-0 systemd[1]: libpod-conmon-93fc40e797d13867d31188c53487e65542db5d705e875fda91ea5cf76bc8c086.scope: Deactivated successfully.
Jan 31 08:51:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 31 08:51:40 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2309244850' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 31 08:51:40 compute-0 podman[263053]: 2026-01-31 08:51:40.506594172 +0000 UTC m=+0.028612586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:51:40 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1714001896' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 31 08:51:40 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/202479044' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 31 08:51:40 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/617319252' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 31 08:51:40 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3504867010' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 31 08:51:40 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2309244850' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 31 08:51:40 compute-0 podman[263053]: 2026-01-31 08:51:40.688478403 +0000 UTC m=+0.210496797 container create 753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 08:51:40 compute-0 nova_compute[238824]: 2026-01-31 08:51:40.823 238828 DEBUG oslo_service.periodic_task [None req-809931dc-e1a1-4c01-b9d8-e7955a1651c6 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 08:51:40 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 31 08:51:40 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186937143' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 31 08:51:40 compute-0 systemd[1]: Started libpod-conmon-753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060.scope.
Jan 31 08:51:40 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e82010edbeafc19de48d1c985f0728be8f249d7ae124efa0678ce41b26929ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e82010edbeafc19de48d1c985f0728be8f249d7ae124efa0678ce41b26929ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e82010edbeafc19de48d1c985f0728be8f249d7ae124efa0678ce41b26929ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e82010edbeafc19de48d1c985f0728be8f249d7ae124efa0678ce41b26929ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/441531515' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 31 08:51:41 compute-0 podman[263053]: 2026-01-31 08:51:41.074478109 +0000 UTC m=+0.596496533 container init 753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:51:41 compute-0 podman[263053]: 2026-01-31 08:51:41.08014 +0000 UTC m=+0.602158404 container start 753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:41 compute-0 podman[263053]: 2026-01-31 08:51:41.176162125 +0000 UTC m=+0.698180529 container attach 753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_austin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/414758100' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:51:41 compute-0 interesting_austin[263104]: {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:     "0": [
Jan 31 08:51:41 compute-0 interesting_austin[263104]:         {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "devices": [
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "/dev/loop3"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             ],
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_name": "ceph_lv0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_size": "21470642176",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=39c36249-2898-4a76-b317-8e4ca379866f,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "name": "ceph_lv0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "tags": {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.block_uuid": "MTsNbY-MKaT-jGv0-3onj-5WQa-gnK0-BbfLsK",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cluster_name": "ceph",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.crush_device_class": "",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.encrypted": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.objectstore": "bluestore",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osd_fsid": "39c36249-2898-4a76-b317-8e4ca379866f",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osd_id": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.type": "block",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.vdo": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.with_tpm": "0"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             },
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "type": "block",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "vg_name": "ceph_vg0"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:         }
Jan 31 08:51:41 compute-0 interesting_austin[263104]:     ],
Jan 31 08:51:41 compute-0 interesting_austin[263104]:     "1": [
Jan 31 08:51:41 compute-0 interesting_austin[263104]:         {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "devices": [
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "/dev/loop4"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             ],
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_name": "ceph_lv1",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_size": "21470642176",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=dacad4fa-56d8-4937-b2d8-306fb75187f3,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "name": "ceph_lv1",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "tags": {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.block_uuid": "p93Mbf-DMxT-pcUt-jSJE-SFna-oscq-yTAd40",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cluster_name": "ceph",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.crush_device_class": "",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.encrypted": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.objectstore": "bluestore",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osd_fsid": "dacad4fa-56d8-4937-b2d8-306fb75187f3",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osd_id": "1",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.type": "block",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.vdo": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.with_tpm": "0"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             },
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "type": "block",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "vg_name": "ceph_vg1"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:         }
Jan 31 08:51:41 compute-0 interesting_austin[263104]:     ],
Jan 31 08:51:41 compute-0 interesting_austin[263104]:     "2": [
Jan 31 08:51:41 compute-0 interesting_austin[263104]:         {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "devices": [
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "/dev/loop5"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             ],
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_name": "ceph_lv2",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_size": "21470642176",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82c880e6-d992-5408-8b12-efff9c275473,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=faa25865-e7b6-44f9-8188-08bf287b941b,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "lv_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "name": "ceph_lv2",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "tags": {
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.block_uuid": "fxd7JU-HnwP-NvcE-M4xv-EgEF-kK7y-w6dXCS",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cephx_lockbox_secret": "",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cluster_fsid": "82c880e6-d992-5408-8b12-efff9c275473",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.cluster_name": "ceph",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.crush_device_class": "",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.encrypted": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.objectstore": "bluestore",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osd_fsid": "faa25865-e7b6-44f9-8188-08bf287b941b",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osd_id": "2",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.type": "block",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.vdo": "0",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:                 "ceph.with_tpm": "0"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             },
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "type": "block",
Jan 31 08:51:41 compute-0 interesting_austin[263104]:             "vg_name": "ceph_vg2"
Jan 31 08:51:41 compute-0 interesting_austin[263104]:         }
Jan 31 08:51:41 compute-0 interesting_austin[263104]:     ]
Jan 31 08:51:41 compute-0 interesting_austin[263104]: }
Jan 31 08:51:41 compute-0 systemd[1]: libpod-753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060.scope: Deactivated successfully.
Jan 31 08:51:41 compute-0 podman[263053]: 2026-01-31 08:51:41.391373736 +0000 UTC m=+0.913392140 container died 753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e82010edbeafc19de48d1c985f0728be8f249d7ae124efa0678ce41b26929ec-merged.mount: Deactivated successfully.
Jan 31 08:51:41 compute-0 podman[263053]: 2026-01-31 08:51:41.446370812 +0000 UTC m=+0.968389216 container remove 753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_austin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 08:51:41 compute-0 systemd[1]: libpod-conmon-753969ecc458448d63fbf582da66f9095bb669cb5363e889800ed12682b27060.scope: Deactivated successfully.
Jan 31 08:51:41 compute-0 podman[263198]: 2026-01-31 08:51:41.496099299 +0000 UTC m=+0.075393309 container health_status 5cc46d1955888fed41771eb977e7f9416e280539f01559118253757fe3eb0869 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 08:51:41 compute-0 sudo[262814]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:41 compute-0 podman[263185]: 2026-01-31 08:51:41.521010049 +0000 UTC m=+0.099297560 container health_status 14c3c41ef3ecc0cb180f3b1c6b2646401de390411c75f2edf127900ced71a3ae (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '3b798815db4eef76ff54d2cfe5801aee605c637f16e47a4297de289ab1fdb9c1-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9-65bbc77fee1fd2a6ee5bad6d8d287ba80799aefa403fa54c6d8976ba75addce9'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 08:51:41 compute-0 sudo[263239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 31 08:51:41 compute-0 sudo[263239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:41 compute-0 sudo[263239]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206635308' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 31 08:51:41 compute-0 sudo[263266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/82c880e6-d992-5408-8b12-efff9c275473/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 82c880e6-d992-5408-8b12-efff9c275473 -- raw list --format json
Jan 31 08:51:41 compute-0 sudo[263266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: pgmap v1488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3186937143' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 31 08:51:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/441531515' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 31 08:51:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/414758100' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 31 08:51:41 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2206635308' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 31 08:51:41 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 31 08:51:41 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949232717' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 31 08:51:41 compute-0 podman[263332]: 2026-01-31 08:51:41.908747284 +0000 UTC m=+0.060249047 container create c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 31 08:51:41 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:41 compute-0 podman[263332]: 2026-01-31 08:51:41.864997498 +0000 UTC m=+0.016499281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:37.586314+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 696320 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:38.586651+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:39.586795+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:40.586999+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:41.587171+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:42.587326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:43.587473+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:44.587631+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:45.587751+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:46.587888+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:47.588106+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:48.588245+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:49.588446+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:50.588624+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:51.588821+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:52.589017+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:53.589182+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 638976 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:54.589354+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:55.589506+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 630784 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:56.589657+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 622592 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:57.589844+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:58.590010+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 614400 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 291.579528809s of 291.617797852s, submitted: 8
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:17:59.590156+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 991232 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:00.590327+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 991232 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:01.590458+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 991232 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:02.590627+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 983040 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:03.590785+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 983040 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:04.590935+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 983040 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:05.591071+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 966656 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:06.591208+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 966656 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:07.591330+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 950272 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:08.591467+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 950272 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:09.591573+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 950272 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:10.591761+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 942080 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:11.591927+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 942080 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:12.592076+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 925696 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:13.592201+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 925696 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:14.592334+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75612160 unmapped: 925696 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:15.592725+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 909312 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:16.592891+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 909312 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:17.593117+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 901120 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:18.593310+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 901120 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:19.593423+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:20.593616+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:21.593744+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 892928 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:22.593918+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 884736 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:23.594072+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 876544 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:24.594220+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 868352 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:25.594342+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 868352 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:26.594461+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 868352 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:27.594599+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 860160 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:28.594712+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 860160 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:29.594853+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 860160 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:30.595037+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 851968 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:31.595165+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 851968 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:32.595335+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:33.595494+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:34.595609+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75694080 unmapped: 843776 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:35.595804+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75702272 unmapped: 835584 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:36.595923+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 827392 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:37.596063+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:38.596202+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 819200 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:39.596315+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 811008 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:40.596506+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 811008 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:41.596640+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 802816 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:42.596792+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 802816 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:43.596933+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 802816 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:44.597067+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 794624 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:45.597213+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 794624 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:46.597379+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 786432 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:47.597597+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:48.597766+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:49.597928+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:50.598118+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:51.598281+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:52.598447+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:53.598594+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:54.598721+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:55.598887+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 770048 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:56.599059+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 761856 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:57.599403+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 761856 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:58.599531+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 761856 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:18:59.599647+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 761856 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:00.599817+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 761856 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:01.599952+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 753664 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:02.600091+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:03.600198+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:04.600373+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:05.600553+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:06.600722+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:07.600932+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:08.601079+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:09.601235+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:10.601534+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 737280 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:11.601673+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:12.601817+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:13.602144+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:14.602284+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:15.602440+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:16.631715+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:17.631894+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:18.632023+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:19.632223+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:20.632409+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:21.632550+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:22.632692+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:23.632917+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:24.633093+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:25.633234+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:26.633330+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 712704 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:27.633534+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:28.633669+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:29.633797+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:30.633948+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:31.634080+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:32.634230+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:33.673873+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:34.674019+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:35.674164+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:36.674301+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:37.674426+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:38.674579+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:39.674775+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:40.675043+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:41.675181+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:42.675375+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:43.675594+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:44.675750+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:45.675891+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:46.676038+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:47.676439+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:48.676592+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:49.676738+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:50.676928+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:51.677135+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:52.677479+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:53.677786+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:54.677924+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:55.678071+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 704512 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:56.678235+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:57.678379+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:58.678514+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 696320 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:19:59.678635+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:00.678783+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 688128 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:01.678974+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:02.679166+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:03.679301+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:04.679447+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:05.679619+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:06.679788+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:07.679929+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:08.680099+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:09.680266+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:10.681021+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:11.681211+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:12.681373+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:13.681552+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75866112 unmapped: 671744 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:14.681823+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:15.681998+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:16.682145+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:17.682307+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:18.682505+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:19.682666+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:20.682879+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 655360 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:21.683070+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 638976 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:22.683321+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75898880 unmapped: 638976 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:23.683475+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:24.683721+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:25.683945+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:26.684154+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:27.684312+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:28.684472+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:29.684659+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:30.685072+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:31.685194+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:32.685335+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:33.685522+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:34.685709+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:35.685892+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:36.686065+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:37.686229+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:38.686419+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:39.686596+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:40.686756+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:41.686897+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:42.687083+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:43.687246+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 630784 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:44.687419+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:45.687578+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:46.687701+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:47.687807+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:48.687985+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:49.688216+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:50.688496+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:51.688644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:52.688768+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 systemd[1]: Started libpod-conmon-c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7.scope.
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:53.688936+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:54.689150+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:55.689305+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:56.689500+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:57.689690+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:58.689825+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75939840 unmapped: 598016 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:20:59.689963+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:00.690177+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 614400 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:01.690381+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 606208 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:02.690518+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 589824 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:03.690686+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 581632 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:04.690843+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:05.691006+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75972608 unmapped: 565248 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:06.691205+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:07.691333+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:08.691469+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:09.691632+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:10.691799+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:11.691929+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:12.692114+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:13.692315+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:14.692494+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:15.692661+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:16.692838+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:17.693010+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:18.693422+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:19.693623+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:20.693788+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:21.693955+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:22.694175+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:23.694375+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:24.694534+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 557056 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:25.694649+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:26.694776+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:27.694909+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:28.695028+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:29.695133+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:30.695344+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:31.695544+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 540672 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:32.695715+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:33.695887+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:34.696084+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:35.696241+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:36.696445+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76013568 unmapped: 524288 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:37.696703+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:38.696871+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:39.697021+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:40.697194+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:41.697308+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:42.697563+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:43.697749+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:44.697924+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:45.698115+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:46.698289+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:47.698481+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:48.698625+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:49.698773+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 516096 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:50.698952+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:51.699105+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:52.699237+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:53.699371+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76029952 unmapped: 507904 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc ms_handle_reset ms_handle_reset con 0x561e03188000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: get_auth_request con 0x561e030b6c00 auth_method 0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:54.699519+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:55.699672+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:56.699818+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:57.699994+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:58.700152+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:21:59.700296+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:00.700560+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76513280 unmapped: 24576 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:01.700741+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 ms_handle_reset con 0x561e0275b400 session 0x561e03cf8000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e053e4800
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:02.700919+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14734 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:03.701074+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:04.701294+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:05.701435+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:06.701617+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 196608 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:07.701770+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:08.701919+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:09.702044+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:10.702204+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:11.702387+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:12.702572+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:13.702775+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:14.702923+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:15.703065+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:16.703223+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76349440 unmapped: 188416 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:17.703403+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:18.703574+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:19.703724+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:20.703885+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:21.704043+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:22.704201+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:23.704370+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:24.704564+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:25.704698+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:26.704888+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 180224 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:27.705034+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 172032 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:28.705177+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 172032 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:29.705386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 172032 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:30.705591+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 172032 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:31.705729+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 172032 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:32.705949+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:33.706148+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:34.706320+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:35.706488+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:36.706809+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:37.707047+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:38.707235+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:39.707524+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:40.707833+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:41.708024+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:42.708165+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:43.708369+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:44.708557+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:45.708780+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:46.708968+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:47.709080+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:48.709378+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:49.709805+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:50.709971+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:51.710158+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:52.710399+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:53.710585+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:54.710776+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001423 data_alloc: 218103808 data_used: 4212
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:55.711004+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:56.711166+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:57.711408+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:58.711554+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 163840 heap: 76537856 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e053e5800
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.245971680s of 300.500274658s, submitted: 106
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:22:59.711687+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:00.711849+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:01.712037+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:02.712232+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:03.712386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:04.712640+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:05.712813+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:06.713020+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:07.713241+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:08.713600+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:09.713792+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:10.714001+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:11.714172+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:12.714309+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:13.714460+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:14.714647+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:15.714851+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:16.715052+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:17.715201+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:18.715339+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:19.715703+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:20.715975+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:21.716138+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:22.716374+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:23.716545+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:24.716688+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:25.716856+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 40960 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:26.717054+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 32768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:27.717314+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 32768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:28.717460+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 32768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:29.717623+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 32768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:30.717907+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77553664 unmapped: 32768 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:31.718152+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:32.718331+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:33.718457+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:34.718639+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:35.718787+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:36.718946+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:37.719118+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:38.719366+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:39.719494+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:40.719666+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 24576 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:41.719822+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 16384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:42.720018+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 16384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:43.720191+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 16384 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:44.720347+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:45.720499+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:46.720857+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:47.721037+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:48.721226+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:49.721319+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:50.721544+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:51.721740+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:52.721928+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:53.722093+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:54.722324+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:55.722515+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:56.722673+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:57.722816+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:58.722942+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:23:59.723085+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:00.723203+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:01.724463+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77586432 unmapped: 0 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:02.724715+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1040384 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:03.724896+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 1040384 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:04.725056+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 1024000 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:05.725346+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 1024000 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:06.725539+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 1015808 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:07.725734+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 1015808 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:08.725924+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 1015808 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:09.726127+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 1015808 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:10.726386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 1015808 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:11.726594+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 1007616 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:12.726772+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 1007616 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:13.726893+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 991232 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:14.727083+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 991232 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:15.727367+0000)
Jan 31 08:51:42 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77643776 unmapped: 991232 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:16.727598+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:17.727779+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:18.727964+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:19.728187+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:20.728435+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:21.728584+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:22.728703+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:23.728849+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:24.728998+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:25.729155+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:26.729426+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:27.729613+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:28.729812+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77651968 unmapped: 983040 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:29.729957+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:30.730158+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:31.730338+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:32.730435+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:33.730609+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:34.730831+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:35.731016+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:36.731231+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:37.731412+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:38.731616+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:39.731770+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:40.731985+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:41.732192+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:42.732339+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:43.732486+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:44.732684+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:45.732799+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77660160 unmapped: 974848 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:46.732926+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:47.733296+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:48.733441+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:49.733568+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:50.733730+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:51.733878+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:52.734021+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:53.734243+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:54.734478+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:55.734636+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:56.734840+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:57.735061+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:58.735318+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:24:59.735467+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77668352 unmapped: 966656 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:00.735652+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:01.735841+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:02.736064+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:03.736238+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:04.736484+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:05.736725+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:06.736860+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:07.737063+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:08.737196+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:09.737331+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:10.737533+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:11.737709+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:12.737902+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:13.738087+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:14.738305+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:15.738452+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77676544 unmapped: 958464 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:16.738613+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 950272 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:17.738743+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 950272 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:18.738950+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77684736 unmapped: 950272 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:19.739123+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 942080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:20.739349+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77692928 unmapped: 942080 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:21.739564+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 933888 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:22.749952+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 933888 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:23.750173+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 933888 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:24.750319+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 933888 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:25.750445+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77701120 unmapped: 933888 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:26.750675+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:27.750841+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:28.750977+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:29.751171+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:30.751393+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:31.751523+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:32.751679+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:33.751855+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:34.751993+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:35.752197+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:36.752410+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:37.752558+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:38.752713+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:39.752898+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:40.753325+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:41.753524+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:42.753700+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:43.753859+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:44.754006+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:45.754175+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:46.754330+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77709312 unmapped: 925696 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:47.754483+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:48.754612+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:49.754748+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:50.755019+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:51.755286+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:52.755473+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:53.755637+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:54.755802+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:55.755968+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:56.756208+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:57.756415+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:58.756659+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:25:59.756814+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77725696 unmapped: 909312 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:00.757022+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:01.757190+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:02.757385+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:03.757541+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:04.757695+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:05.757852+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:06.758045+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:07.758202+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:08.758329+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:09.758488+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:10.758746+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:11.758879+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:12.759055+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:13.759231+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:14.759393+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:15.759465+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:16.759668+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77733888 unmapped: 901120 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:17.759816+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:18.759972+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:19.760139+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:20.760367+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:21.760508+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:22.760655+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:23.760753+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:24.760898+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:25.761043+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:26.761182+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:27.761327+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:28.761478+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:29.761616+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:30.761840+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:31.761963+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:32.762169+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:33.762320+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:34.762451+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:35.762596+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 892928 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:36.762770+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:37.762962+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:38.763135+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:39.763279+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:40.763492+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:41.763731+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:42.763923+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:43.764059+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:44.764197+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77750272 unmapped: 884736 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:45.764590+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5823 writes, 24K keys, 5823 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5823 writes, 961 syncs, 6.06 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e01461a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561e014618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77783040 unmapped: 851968 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:46.765195+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:47.765393+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:48.765707+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:49.765963+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:50.766398+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77791232 unmapped: 843776 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:51.766571+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:52.766863+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:53.767144+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:54.767304+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:55.767487+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:56.767725+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:57.767937+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:58.768146+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:26:59.768334+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:00.768501+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:01.768630+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:02.768760+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:03.768956+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:04.769169+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:05.769361+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:06.769520+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:07.769667+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:08.769807+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:09.770018+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:10.770299+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77799424 unmapped: 835584 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:11.770460+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:12.770625+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:13.770774+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:14.770951+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:15.771171+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:16.771384+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:17.771573+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:18.771712+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:19.771848+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:20.772068+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:21.772231+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:22.772383+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:23.772556+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:24.772721+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:25.772876+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77807616 unmapped: 827392 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:26.773054+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:27.773218+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:28.773437+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:29.773640+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:30.773850+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:31.774023+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:32.774187+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:33.774331+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:34.774458+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:35.774589+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77815808 unmapped: 819200 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:36.774804+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:37.774975+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:38.775185+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:39.775360+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:40.775612+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77824000 unmapped: 811008 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:41.775823+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:42.775985+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:43.776117+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:44.776357+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:45.776545+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:46.776864+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:47.777010+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:48.777174+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:49.777339+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:50.777571+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:51.777764+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:52.777933+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:53.778081+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:54.778229+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:55.778367+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:56.778638+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:57.778928+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:58.779189+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77840384 unmapped: 794624 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.960815430s of 300.004333496s, submitted: 18
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:27:59.779326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77946880 unmapped: 688128 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:00.779538+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 77971456 unmapped: 663552 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:01.779677+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78045184 unmapped: 589824 heap: 78635008 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:02.779806+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1466368 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:03.779928+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1409024 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:04.780077+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78241792 unmapped: 1441792 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:05.780315+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78274560 unmapped: 1409024 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:06.780529+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:07.780707+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:08.780873+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:09.781027+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:10.781226+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:11.781374+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:12.781531+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:13.781735+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:14.781902+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:15.782755+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:16.783081+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:17.783289+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:18.783702+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:19.784023+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:20.784518+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:21.784730+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:22.785120+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:23.785368+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:24.785728+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:25.785871+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:26.786028+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:27.786170+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:28.786543+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:29.786840+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:30.787213+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:31.787461+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:32.787631+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:33.787785+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:34.787929+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:35.788077+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:36.788305+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:37.788467+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:38.788605+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:39.788778+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:40.788973+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:41.789141+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:42.789319+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:43.789572+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:44.789784+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:45.789979+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:46.790195+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:47.790459+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:48.790621+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:49.790753+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:50.790940+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:51.791091+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:52.791386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:53.791639+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:54.791823+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:55.791965+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 heartbeat osd_stat(store_statfs(0x4fcea6000/0x0/0x4ffc00000, data 0xc0d0e/0x186000, compress 0x0/0x0/0x0, omap 0x1681f, meta 0x2bb97e1), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001807 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:56.792170+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:57.792348+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78315520 unmapped: 1368064 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.710681915s of 59.369503021s, submitted: 106
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:58.792498+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78094336 unmapped: 1589248 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fcea1000/0x0/0x4ffc00000, data 0xc28aa/0x189000, compress 0x0/0x0/0x0, omap 0x16a2d, meta 0x2bb95d3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:28:59.792686+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1662976 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:00.792867+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9c000/0x0/0x4ffc00000, data 0xc449a/0x18c000, compress 0x0/0x0/0x0, omap 0x16ce1, meta 0x2bb931f), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008539 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1662976 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:01.793004+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9c000/0x0/0x4ffc00000, data 0xc449a/0x18c000, compress 0x0/0x0/0x0, omap 0x16ce1, meta 0x2bb931f), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 129 heartbeat osd_stat(store_statfs(0x4fce9c000/0x0/0x4ffc00000, data 0xc449a/0x18c000, compress 0x0/0x0/0x0, omap 0x16ce1, meta 0x2bb931f), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78020608 unmapped: 1662976 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:02.801151+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1646592 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:03.801383+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1646592 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:04.801576+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1646592 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:05.801717+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1010849 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1646592 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:06.801935+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fce9b000/0x0/0x4ffc00000, data 0xc6052/0x18f000, compress 0x0/0x0/0x0, omap 0x16f97, meta 0x2bb9069), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1646592 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:07.802140+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78036992 unmapped: 1646592 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:08.802366+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 131 handle_osd_map epochs [131,132], i have 132, src has [1,132]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.232539177s of 10.651485443s, submitted: 15
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1597440 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:09.802582+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1597440 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:10.802915+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016397 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1597440 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:11.803214+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1597440 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:12.803456+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 132 heartbeat osd_stat(store_statfs(0x4fce95000/0x0/0x4ffc00000, data 0xc9689/0x195000, compress 0x0/0x0/0x0, omap 0x17445, meta 0x2bb8bbb), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1597440 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:13.803593+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:14.803780+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78086144 unmapped: 1597440 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e053e5c00
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:15.803962+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1433600 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1016397 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:16.804080+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1433600 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce92000/0x0/0x4ffc00000, data 0xcb279/0x198000, compress 0x0/0x0/0x0, omap 0x17700, meta 0x2bb8900), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:17.804317+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78282752 unmapped: 1400832 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 133 ms_handle_reset con 0x561e053e5c00 session 0x561e03d98a80
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:18.804509+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:19.804651+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:20.804883+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78307328 unmapped: 1376256 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e03aa5c00
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019957 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:21.805084+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xcb279/0x198000, compress 0x0/0x0/0x0, omap 0x17877, meta 0x2bb8789), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:22.805218+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.136819839s of 13.394361496s, submitted: 22
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fce94000/0x0/0x4ffc00000, data 0xcb279/0x198000, compress 0x0/0x0/0x0, omap 0x17877, meta 0x2bb8789), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:23.805485+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:24.805725+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fce8f000/0x0/0x4ffc00000, data 0xcce69/0x19b000, compress 0x0/0x0/0x0, omap 0x17b2c, meta 0x2bb84d4), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:25.805926+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 135 ms_handle_reset con 0x561e03aa5c00 session 0x561e03d98000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1026689 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:26.806112+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:27.806318+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:28.806506+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:29.806668+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:30.806898+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:31.807130+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:32.807328+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:33.807514+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:34.807669+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:35.807805+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:36.807953+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:37.808094+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:38.808235+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:39.808401+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:40.809202+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:41.809313+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:42.809527+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:43.809868+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:44.810065+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:45.810234+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:46.810528+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78487552 unmapped: 1196032 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:47.810709+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:48.810909+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:49.811074+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:50.811326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:51.811633+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:52.811948+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:53.812121+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:54.812379+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:55.812680+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:56.812879+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:57.813195+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:58.813381+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:29:59.813630+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:00.813994+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:01.814285+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:02.814531+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:03.814851+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78495744 unmapped: 1187840 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:04.815154+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:05.815373+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:06.815539+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:07.815708+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:08.815898+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:09.816082+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:10.816487+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:11.816674+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:12.816936+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:13.817112+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:14.817461+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:15.817670+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:16.817861+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:17.818034+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:18.818289+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:19.818586+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:20.818886+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:21.819142+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:22.819339+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:23.819550+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78512128 unmapped: 1171456 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:24.819808+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:25.819985+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:26.820142+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:27.820363+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:28.820522+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:29.820681+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:30.820903+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:31.821050+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:32.821203+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:33.824015+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:34.824152+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:35.824303+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:36.824405+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:37.824543+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:38.824691+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:39.824911+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:40.825101+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:41.825278+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:42.825432+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:43.825619+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78528512 unmapped: 1155072 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:44.825762+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:45.825940+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:46.826095+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:47.826411+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:48.826666+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:49.826860+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78544896 unmapped: 1138688 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:50.827242+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:51.827451+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:52.827579+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:53.827726+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:54.827891+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:55.828079+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:56.828312+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:57.828500+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:58.828678+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:30:59.828892+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:00.829140+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:01.829388+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78553088 unmapped: 1130496 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:02.829549+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread fragmentation_score=0.000115 took=0.000017s
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:03.829743+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78561280 unmapped: 1122304 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:04.829909+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:05.830128+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:06.830309+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:07.830567+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:08.830769+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:09.830940+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:10.831231+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:11.831466+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:12.831697+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:13.831919+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:14.832067+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:15.832395+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:16.832660+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:17.832925+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:18.833094+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:19.833308+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:20.833519+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:21.833715+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:22.833883+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:23.834037+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:24.834346+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:25.834504+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:26.834656+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:27.834863+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:28.835068+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:29.835200+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:30.835399+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:31.835584+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:32.835749+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:33.835939+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:34.836158+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:35.836570+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:36.836711+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:37.836903+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:38.837104+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:39.837348+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:40.837598+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:41.837782+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:42.837989+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:43.838121+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:44.838292+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:45.838464+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:46.838718+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:47.838902+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:48.839085+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:49.839295+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:50.839466+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:51.839583+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:52.839856+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:53.840038+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:54.840209+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:55.840408+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:56.840565+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:57.840714+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:58.840858+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:31:59.840944+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:00.841125+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:01.841434+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:02.841621+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:03.841775+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:04.841918+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:05.842093+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:06.842288+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:07.842402+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:08.842507+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:09.842640+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:10.842805+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:11.843050+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:12.843191+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:13.843335+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:14.843488+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:15.843673+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:16.843791+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:17.843936+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:18.844062+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:19.844190+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:20.844364+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:21.844495+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:22.844610+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78577664 unmapped: 1105920 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:23.844741+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:24.844907+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:25.845021+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:26.845298+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:27.845459+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:28.845620+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:29.845793+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:30.846110+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:31.846272+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:32.846420+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:33.846552+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:34.847003+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:35.847402+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:36.847820+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:37.848111+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:38.848468+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:39.848620+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:40.848804+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:41.849050+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:42.849223+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:43.849371+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:44.849553+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:45.849739+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:46.849915+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:47.850076+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:48.850208+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:49.850346+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:50.850557+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:51.850723+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:52.851064+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:53.851334+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:54.851560+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:55.851716+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:56.851983+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:57.852185+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:58.852451+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:32:59.852586+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:00.852793+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:01.852960+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:02.853086+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:03.853213+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:04.853339+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:05.853502+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:06.853599+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:07.853736+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:08.853842+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:09.854045+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:10.854335+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:11.854429+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:12.854553+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:13.854685+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:14.854801+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:15.854915+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:16.855071+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:17.855224+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:18.855366+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:19.855483+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:20.855650+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:21.855779+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:22.855891+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:23.856014+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:24.856163+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:25.856373+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:26.856562+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:27.856730+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:28.856872+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:29.857047+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:30.857291+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:31.857463+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:32.857593+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:33.857756+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:34.857942+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:35.858050+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:36.858207+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:37.858357+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:38.858676+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:39.859221+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:40.859542+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:41.859772+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:42.859916+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:43.860054+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:44.860240+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:45.860589+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:46.860729+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:47.860892+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:48.861375+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:49.862353+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:50.862801+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:51.863169+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:52.863335+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:53.863602+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:54.863852+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:55.863988+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:56.864189+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:57.864380+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:58.864598+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:33:59.864768+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:00.864930+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:01.865129+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:02.865472+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:03.865642+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:04.865853+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:05.866067+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:06.866196+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:07.866355+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:08.872332+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:09.872522+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:10.872644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:11.872744+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:12.872878+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:13.872998+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:14.873135+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:15.873288+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:16.873416+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:17.873558+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:18.873731+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:19.873914+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:20.874087+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:21.874281+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:22.874465+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:23.874648+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:24.874800+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:25.874924+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:26.875090+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:27.875234+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:28.875410+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:29.875598+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:30.875780+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:31.875937+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:32.876045+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:33.876215+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:34.876386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:35.876580+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:36.876753+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:37.876916+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:38.877113+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:39.877233+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:40.877423+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:41.877587+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:42.877741+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:43.877910+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:44.878179+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:45.878365+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:46.878501+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:47.878670+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:48.878859+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:49.879090+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:50.879326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:51.879614+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:52.879795+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:53.879955+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:54.880114+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:55.880321+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:56.882943+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:57.883166+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:58.883349+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:34:59.883526+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:00.883716+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:01.883886+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:02.884025+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:03.884170+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:04.884316+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:05.884459+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:06.884600+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:07.884766+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:08.884938+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:09.885098+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:10.885298+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:11.885437+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:12.885596+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:13.885740+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:14.885909+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:15.886071+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:16.886230+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:17.886451+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:18.886603+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:19.886741+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:20.886987+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:21.887129+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:22.887247+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:23.887514+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:24.887664+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:25.887793+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:26.887941+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:27.888069+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:28.888326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:29.888566+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:30.888846+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:31.889037+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:32.889164+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:33.889294+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:34.889474+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:35.889589+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:36.889759+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:37.889921+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:38.890156+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:39.890426+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:40.890665+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:41.890863+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:42.891033+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:43.891169+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:44.891328+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:45.891491+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:46.891625+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:47.891793+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:48.891973+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:49.892129+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:50.892317+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:51.892428+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:52.892636+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:53.892810+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:54.893074+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:55.893305+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:56.893497+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:57.893711+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:58.893912+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:35:59.894057+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:00.894296+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:01.894506+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:02.894717+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:03.894914+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:04.895110+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:05.895244+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:06.895427+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:07.895565+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:08.895732+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:09.895964+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:10.896312+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:11.896496+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:12.896706+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:13.896880+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:14.897131+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:15.897290+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:16.897444+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:17.897657+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:18.897839+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:19.897989+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78585856 unmapped: 1097728 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:20.898190+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:21.898347+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:22.898535+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:23.898646+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:24.898771+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:25.898946+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:26.899114+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:27.899336+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:28.899470+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:29.899634+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 heartbeat osd_stat(store_statfs(0x4fce89000/0x0/0x4ffc00000, data 0xd0383/0x1a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:30.899803+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:31.899958+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:32.900163+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1028999 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:33.900314+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78594048 unmapped: 1089536 heap: 79683584 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e0538a000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:34.900445+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 431.169952393s of 432.121490479s, submitted: 33
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 87228416 unmapped: 9240576 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 136 handle_osd_map epochs [136,137], i have 137, src has [1,137]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 137 ms_handle_reset con 0x561e0538a000 session 0x561e03d98540
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:35.900607+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78839808 unmapped: 17629184 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc68b000/0x0/0x4ffc00000, data 0x8d0383/0x9a1000, compress 0x0/0x0/0x0, omap 0x180fe, meta 0x2bb7f02), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e0538a400
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:36.900762+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79077376 unmapped: 17391616 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 ms_handle_reset con 0x561e0538a400 session 0x561e05984000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:37.900927+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103110 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:38.901106+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:39.901417+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd43aee/0xe19000, compress 0x0/0x0/0x0, omap 0x1866a, meta 0x2bb7996), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:40.901630+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:41.901838+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:42.902098+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103110 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:43.902334+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:44.902499+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd43aee/0xe19000, compress 0x0/0x0/0x0, omap 0x1866a, meta 0x2bb7996), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:45.902729+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 6186 writes, 25K keys, 6186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6186 writes, 1125 syncs, 5.50 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 363 writes, 834 keys, 363 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
                                           Interval WAL: 363 writes, 164 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:46.903009+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd43aee/0xe19000, compress 0x0/0x0/0x0, omap 0x1866a, meta 0x2bb7996), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:47.903732+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd43aee/0xe19000, compress 0x0/0x0/0x0, omap 0x1866a, meta 0x2bb7996), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1103110 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:48.903890+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78848000 unmapped: 17620992 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fc211000/0x0/0x4ffc00000, data 0xd43aee/0xe19000, compress 0x0/0x0/0x0, omap 0x1866a, meta 0x2bb7996), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e0538a400
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 138 handle_osd_map epochs [138,139], i have 139, src has [1,139]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.457365036s of 14.707196236s, submitted: 23
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:49.904019+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 139 ms_handle_reset con 0x561e0538a400 session 0x561e050c61c0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc20e000/0x0/0x4ffc00000, data 0xd43aee/0xe19000, compress 0x0/0x0/0x0, omap 0x1866a, meta 0x2bb7996), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17596416 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:50.904218+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17596416 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fca0e000/0x0/0x4ffc00000, data 0x5456de/0x61c000, compress 0x0/0x0/0x0, omap 0x18a82, meta 0x2bb757e), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:51.904470+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78872576 unmapped: 17596416 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e03aa5c00
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fca10000/0x0/0x4ffc00000, data 0x5456de/0x61c000, compress 0x0/0x0/0x0, omap 0x18a82, meta 0x2bb757e), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:52.904604+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78905344 unmapped: 17563648 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 140 ms_handle_reset con 0x561e03aa5c00 session 0x561e052b9340
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044745 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:53.904793+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 78929920 unmapped: 17539072 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc ms_handle_reset ms_handle_reset con 0x561e030b6c00
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2264315754
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2264315754,v1:192.168.122.100:6801/2264315754]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: get_auth_request con 0x561e0275a800 auth_method 0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: mgrc handle_mgr_configure stats_period=5
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:54.905049+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:55.905238+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0xd729b/0x1ad000, compress 0x0/0x0/0x0, omap 0x18e4a, meta 0x2bb71b6), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:56.905492+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:57.905674+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fca0c000/0x0/0x4ffc00000, data 0xd729b/0x1ad000, compress 0x0/0x0/0x0, omap 0x18e4a, meta 0x2bb71b6), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1044745 data_alloc: 218103808 data_used: 4672
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:58.905877+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:36:59.906415+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 140 handle_osd_map epochs [140,141], i have 141, src has [1,141]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.769979477s of 10.906690598s, submitted: 43
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:00.908000+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 ms_handle_reset con 0x561e053e4800 session 0x561e056cd180
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e04fb0000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:01.908152+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:02.908301+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:03.908475+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:04.908660+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:05.908873+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:06.909115+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:07.909337+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:08.909540+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:09.909667+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:10.909813+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79298560 unmapped: 17170432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:11.910003+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:12.910210+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:13.910470+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:14.910634+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:15.910798+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:16.910983+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:17.911172+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:18.911311+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:19.911462+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:20.911684+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:21.911841+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:22.911975+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:23.912354+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:24.912568+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:25.912943+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:26.913161+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:27.913467+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:28.915778+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:29.916054+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:30.916360+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:31.916589+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:32.916832+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:33.916953+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:34.917146+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:35.917561+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:36.918461+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:37.919353+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:38.920153+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:39.920761+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:40.921220+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:41.921477+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:42.921591+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:43.922386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:44.922553+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:45.922692+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:46.922946+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:47.923103+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:48.923390+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:49.923549+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:50.923921+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:51.924096+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:52.924328+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:53.924480+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 ms_handle_reset con 0x561e03ba0000 session 0x561e03dac8c0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e0538b000
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:54.924630+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:55.924764+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:56.925050+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:57.926201+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1047439 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:58.926963+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 79282176 unmapped: 17186816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7a000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:37:59.927148+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 59.294551849s of 59.448009491s, submitted: 9
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:00.927444+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 15040512 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [1])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:01.927572+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:02.928438+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:03.928723+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:04.928894+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 15212544 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:05.929307+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 15212544 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:06.929524+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 15212544 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:07.930083+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 15212544 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:08.930333+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 15212544 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:09.930523+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:10.930744+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:11.930963+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:12.931154+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:13.931399+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:14.931534+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:15.931734+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:16.931888+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:17.932021+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:18.932286+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:19.932651+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:20.932797+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:21.933105+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:22.933347+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:23.933570+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:24.934408+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:25.934570+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:26.934702+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:27.934875+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 15196160 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:28.935056+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:29.935189+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:30.935346+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:31.935447+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:32.935609+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:33.935752+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:35.221287+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:36.221401+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:37.221537+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:38.221706+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:39.221878+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:40.222043+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:41.222238+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:42.222433+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:43.222612+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:44.222786+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:45.222944+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:46.223158+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:47.223367+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:48.223520+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 15179776 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:49.223688+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:50.223871+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:51.224060+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:52.224203+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:53.224354+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:54.224512+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:55.224669+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:56.224807+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:57.224973+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:58.225156+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:38:59.225344+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:00.225504+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:01.225717+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:02.226298+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 15163392 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:03.226416+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 15155200 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:04.226570+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 15155200 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:05.226979+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 15155200 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:06.227153+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 15155200 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:07.227862+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 15155200 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:08.228348+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 15155200 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:09.228792+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:10.229049+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:11.229391+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:12.229751+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:13.230314+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:14.230681+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:15.230954+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:16.231333+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:17.231548+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:18.231830+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:19.232236+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:20.232402+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:21.232710+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:22.232915+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:23.233115+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:24.233368+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:25.233552+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:26.233732+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:27.233966+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:28.234162+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 15138816 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:29.234376+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:30.234609+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:31.234862+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:32.235096+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:33.235294+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:34.235486+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:35.235734+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:36.236206+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:37.236416+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:38.236603+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:39.236764+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:40.236918+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:41.237110+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:42.237380+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:43.237636+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:44.237833+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:45.238039+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:46.238290+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:47.238526+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:48.238732+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 15122432 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:49.238894+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:50.239005+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:51.239191+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:52.239378+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:53.239548+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:54.239715+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:55.239861+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:56.240064+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:57.240287+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:58.240417+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:39:59.240534+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:00.240655+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:01.240802+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:02.240921+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:03.241048+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:04.241207+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:05.241416+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:06.241566+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:07.241757+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:08.241953+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:09.242149+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 15097856 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:10.242312+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:11.242487+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:12.242653+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:13.242799+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:14.242935+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:15.243106+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:16.243334+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:17.243481+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:18.243722+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:19.243946+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:20.244135+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:21.244318+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:22.244474+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:23.244652+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:24.244807+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:25.244978+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:26.245221+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:27.245357+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:28.245528+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:29.245712+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 15081472 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:30.245839+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:31.246045+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:32.246185+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:33.246325+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:34.246500+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:35.246637+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:36.246852+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:37.246976+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:38.247118+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:39.247269+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:40.247394+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:41.247565+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:42.247696+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:43.247841+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:44.247966+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:45.248089+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:46.248215+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:47.248353+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:48.248487+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:49.248619+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 15065088 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:50.248782+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:51.248980+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:52.249061+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:53.249184+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:54.249321+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:55.249484+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:56.249627+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:57.249755+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:58.249841+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:40:59.249990+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:00.250127+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:01.250385+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:02.250539+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:03.250660+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:04.250812+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:05.250955+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:06.251123+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:07.251307+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:08.251440+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:09.251559+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 15048704 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:10.251695+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:11.251863+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:12.252001+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:13.252121+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:14.252246+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:15.252371+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:16.252499+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:17.252616+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:18.252777+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:19.253044+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:20.253181+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:21.253383+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:22.253570+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:23.253727+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:24.253935+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:25.254112+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:26.254293+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:27.254488+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:28.254661+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:29.254825+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:30.254992+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:31.255151+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:32.255312+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:33.255495+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:34.255644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:35.255824+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:36.256118+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:37.256348+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:38.256494+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:39.256617+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:40.256780+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:41.257022+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:42.257185+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:43.257443+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:44.257726+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:45.257874+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:46.258011+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:47.258151+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:48.258329+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:49.258502+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:50.258672+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:51.258889+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:52.259028+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:53.259183+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:54.259318+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:55.259482+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:56.259667+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:57.259918+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:58.260080+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:41:59.260379+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:00.260567+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:01.260785+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:02.260948+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:03.261161+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:04.261333+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:05.261519+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:06.261661+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:07.261813+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:08.261973+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 15269888 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:09.262177+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:10.262329+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:11.262572+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:12.262743+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:13.262906+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:14.263651+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:15.264113+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:16.265555+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:17.265914+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:18.266136+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:19.266321+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:20.267373+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:21.267574+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:22.268357+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:23.268509+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:24.268650+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:25.268990+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:26.269225+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:27.269326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:28.269537+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:29.269714+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:30.269835+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:31.269989+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:32.270326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:33.270507+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:34.270694+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:35.270917+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:36.271133+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:37.271413+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:38.271616+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:39.271805+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:40.272047+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:41.272408+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:42.272603+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:43.272776+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:44.272959+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:45.273316+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:46.273949+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:47.274518+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:48.275157+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:49.275566+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:50.275912+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:51.276221+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:52.276409+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:53.276762+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:54.276978+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:55.277221+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:56.277341+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:57.277585+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:58.277873+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:42:59.278124+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:00.278301+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:01.278507+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:02.278691+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:03.278837+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:04.279057+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:05.279301+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:06.279520+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:07.279732+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:08.279936+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:09.280132+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:10.280321+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:11.280547+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:12.280725+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:13.280971+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:14.281130+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:15.281327+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:16.281460+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:17.281665+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:18.281793+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:19.282028+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:20.282310+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:21.282552+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:22.282746+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:23.282950+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:24.283111+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:25.283323+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:26.283446+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:27.283622+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:28.283784+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:29.283998+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:30.284205+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:31.284431+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:32.284581+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:33.284725+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:34.284972+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:35.285121+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:36.285317+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:37.285450+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:38.285615+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:39.285775+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:40.285910+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:41.286095+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:42.286284+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:43.286433+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:44.286564+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:45.286781+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:46.286982+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:47.287138+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:48.287386+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:49.287620+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:50.287866+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:51.288126+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:52.288315+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:53.288484+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:54.288691+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:55.288922+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:56.289130+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:57.289352+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:58.289549+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:43:59.289780+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:00.289972+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:01.290162+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:02.290362+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:03.290544+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:04.290773+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:05.291005+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:06.291179+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:07.291387+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:08.291606+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:09.291819+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:10.292014+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:11.292293+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:12.292476+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:13.292641+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:14.292834+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:15.292972+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:16.293179+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:17.293372+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:18.293626+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:19.293854+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:20.294039+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:21.294326+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:22.294488+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:23.294685+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:24.294897+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:25.295135+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:26.295346+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:27.295549+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:28.295770+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:29.295949+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:30.296128+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:31.296361+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:32.296569+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:33.296820+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:34.297058+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:35.297339+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:36.297559+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:37.297686+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:38.297842+0000)
Jan 31 08:51:42 compute-0 podman[263332]: 2026-01-31 08:51:42.212818706 +0000 UTC m=+0.364320489 container init c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:39.297961+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:40.298137+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:41.298308+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:42.298440+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:43.298577+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:44.298877+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:45.299214+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:46.299434+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:47.299644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:48.299848+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:49.300006+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:50.300182+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:51.300411+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:52.300593+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:53.300789+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:54.300915+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:55.301130+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:56.301363+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:57.301591+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:58.301775+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:44:59.301980+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:00.302119+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:01.302360+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:02.302488+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:03.302644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:04.302768+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:05.303069+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:06.303218+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:07.303434+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:08.303583+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:09.303767+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:10.303910+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:11.304102+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:12.304242+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:13.304388+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:14.304508+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:15.304671+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:16.304799+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:17.304951+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:18.305095+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:19.305243+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:20.305412+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:21.305582+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:22.305726+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:23.305930+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:24.306083+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:25.306309+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:26.306524+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:27.306662+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:28.306832+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:29.307040+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:30.307186+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:31.307389+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:32.307542+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:33.307785+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:34.307936+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:35.308097+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:36.308231+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:37.308429+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:38.957157+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:39.957316+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:40.957507+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:41.957721+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:42.957936+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:43.958050+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:44.958185+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:45.958363+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:46.958478+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:47.958645+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:48.958797+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:49.959024+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:50.959176+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:51.959464+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:52.959614+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:53.959832+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 podman[263332]: 2026-01-31 08:51:42.223064378 +0000 UTC m=+0.374566131 container start c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 31 08:51:42 compute-0 lucid_chatterjee[263370]: 167 167
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:54.959999+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:55.960193+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:56.960345+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 systemd[1]: libpod-c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7.scope: Deactivated successfully.
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:57.960506+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:58.960656+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:45:59.960827+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:00.960985+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:01.961365+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 15253504 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:02.961492+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:03.961635+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:04.961778+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:05.961926+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:06.962046+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:07.962197+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:08.962338+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:09.962560+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:10.962774+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:11.963227+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:12.963456+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:13.963593+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:14.963758+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:15.964024+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:16.964192+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:17.964381+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:18.964534+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:19.964722+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:20.964871+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:21.965102+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:22.965247+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:23.965546+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:24.965721+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:25.965907+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:26.966069+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:27.966401+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:28.966561+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:29.966736+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:30.966893+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:31.968229+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:32.968716+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:33.969142+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:34.969636+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:35.970327+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:36.970697+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:37.971829+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:38.972073+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:39.972832+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:40.973178+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:41.973865+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:42.974160+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:43.974370+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:44.974586+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:45.974773+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 6532 writes, 26K keys, 6532 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                           Cumulative WAL: 6532 writes, 1295 syncs, 5.04 writes per sync, written: 0.02 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 346 writes, 673 keys, 346 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 346 writes, 170 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:46.974937+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:47.975125+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:48.975341+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:49.975632+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:50.975880+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:51.976185+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:52.976492+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:53.976696+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:54.977007+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:55.977186+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:56.977396+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:57.977586+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:58.977694+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:46:59.977850+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:00.978109+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:01.978537+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 15245312 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:02.978754+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:03.978953+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:04.979139+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:05.979349+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:06.979527+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:07.979679+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:08.979819+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:09.979985+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:10.980141+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:11.980324+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:12.980470+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:13.980619+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:14.980763+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:15.980951+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:16.981142+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:17.981391+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:18.981509+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:19.981649+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:20.981758+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:21.981919+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:22.982046+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:23.982180+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:24.982298+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:25.982448+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:26.982587+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:27.982715+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:28.982853+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:29.983092+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:30.983315+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:31.983487+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:32.983627+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:33.983779+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:34.983925+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:35.984164+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:36.984329+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:37.984512+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:38.984671+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:39.984828+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:40.985017+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:41.985193+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:42.985314+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:43.985472+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:44.985608+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:45.985772+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:46.985928+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:47.986091+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:48.986307+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:49.986515+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:50.986660+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:51.986916+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:52.987092+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:53.987337+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:54.987504+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:55.987930+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:56.988205+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:57.988476+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:58.988673+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 15228928 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:47:59.988859+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 599.551330566s of 600.349853516s, submitted: 124
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 15212544 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:00.989029+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 15089664 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:01.989234+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 15114240 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:02.989447+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:03.989597+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:04.989820+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:05.990020+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:06.990195+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:07.990397+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:08.990599+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:09.990806+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:10.991007+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:11.991317+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:12.991498+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:13.991644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:14.991816+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:15.992059+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:16.992234+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:17.992456+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:18.992686+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:19.996367+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:20.996615+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fce7c000/0x0/0x4ffc00000, data 0xd8d1a/0x1b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1046719 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:21.996951+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:22.997153+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 14999552 heap: 96468992 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e0538b400
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.933744431s of 23.542282104s, submitted: 106
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:23.997346+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:24.997617+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc67c000/0x0/0x4ffc00000, data 0x8d8d1a/0x9b0000, compress 0x0/0x0/0x0, omap 0x19398, meta 0x2bb6c68), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 ms_handle_reset con 0x561e0538b400 session 0x561e03d98fc0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:25.997771+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:27.006689+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:28.006873+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:29.007029+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:30.007170+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:31.007422+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:32.007642+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:33.007872+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:34.008086+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:35.008220+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:36.008378+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:37.008554+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:38.008763+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:39.008935+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:40.009084+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:41.009280+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:42.009478+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:43.009611+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:44.009769+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:45.009912+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:46.010057+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:47.010234+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:48.010407+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:49.010545+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:50.010682+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:51.010825+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:52.011024+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:53.011170+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:54.011316+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:55.011446+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:56.011642+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:57.011789+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:58.012683+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:48:59.012817+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:00.012958+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:01.013089+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:02.013375+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:03.013510+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:04.013690+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:05.013837+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:06.013963+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:07.014097+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:08.014278+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:09.014412+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:10.014549+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:11.015376+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:12.015569+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:13.015846+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:14.016129+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:15.016378+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:16.016604+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:17.016802+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:18.017028+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:19.017311+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:20.017448+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:21.017579+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:22.018089+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:23.018350+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:24.018502+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:25.018705+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fc677000/0x0/0x4ffc00000, data 0x8da8b6/0x9b3000, compress 0x0/0x0/0x0, omap 0x1964d, meta 0x2bb69b3), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:26.018990+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1091809 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:27.019231+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:28.019465+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 23207936 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: handle_auth_request added challenge on 0x561e0538b800
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:29.019625+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 65.061836243s of 65.149841309s, submitted: 2
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 143 ms_handle_reset con 0x561e0538b800 session 0x561e05984e00
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:30.019826+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0xdc4a6/0x1b6000, compress 0x0/0x0/0x0, omap 0x199bf, meta 0x2bb6641), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:31.020072+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0xdc4a6/0x1b6000, compress 0x0/0x0/0x0, omap 0x199bf, meta 0x2bb6641), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054315 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:32.020292+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:33.020485+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 143 heartbeat osd_stat(store_statfs(0x4fce74000/0x0/0x4ffc00000, data 0xdc4a6/0x1b6000, compress 0x0/0x0/0x0, omap 0x199bf, meta 0x2bb6641), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:34.020658+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:35.020863+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:36.021353+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:37.021561+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _renew_subs
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:38.021726+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:39.021896+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:40.022074+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:41.022312+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:42.022526+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 23068672 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:43.022681+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:44.022866+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:45.023065+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:46.023335+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:47.023473+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:48.023625+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:49.023786+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:50.023924+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:51.024079+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:52.024289+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:53.024471+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:54.024647+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:55.025016+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:56.025222+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:57.025452+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:58.025611+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:49:59.025779+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:00.025947+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:01.026107+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:02.035046+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:03.035218+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:04.035378+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:05.035517+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:06.035648+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:07.035802+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:08.036009+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:09.036163+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:10.036375+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:11.036524+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:12.036924+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:13.037191+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:14.037331+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:15.037543+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:16.037749+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:17.037904+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:18.038117+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:19.038364+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:20.038541+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:21.038672+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:22.038974+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:23.039144+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:24.039322+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:25.039527+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:26.039747+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:27.039906+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:28.040125+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:29.040302+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:30.040442+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:31.040719+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:32.040943+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:33.041241+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:34.041644+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:35.041895+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:36.042168+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:37.042340+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:38.042586+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:39.042720+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:40.042859+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:41.043042+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:42.043202+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:43.043364+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:44.043561+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:45.043798+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:46.043978+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:47.044183+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:48.044349+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:49.044572+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:50.044716+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:51.044855+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:52.045040+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:53.045215+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:54.045314+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:55.045467+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:56.045577+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:57.045699+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:58.045848+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:50:59.045967+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:00.046089+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:01.046203+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:02.046298+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:03.046441+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:04.046559+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:05.046690+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:06.046831+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:07.046946+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 08:51:42 compute-0 ceph-osd[85971]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1057089 data_alloc: 218103808 data_used: 8733
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:08.047076+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fce71000/0x0/0x4ffc00000, data 0xddf25/0x1b9000, compress 0x0/0x0/0x0, omap 0x19cf9, meta 0x2bb6307), peers [1,2] op hist [])
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 23060480 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:09.047206+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 22888448 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'config diff' '{prefix=config diff}'
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'config show' '{prefix=config show}'
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:10.047321+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 22282240 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: tick
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_tickets
Jan 31 08:51:42 compute-0 ceph-osd[85971]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-31T08:51:11.047450+0000)
Jan 31 08:51:42 compute-0 ceph-osd[85971]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 22192128 heap: 104865792 old mem: 2845415832 new mem: 2845415832
Jan 31 08:51:42 compute-0 ceph-osd[85971]: do_command 'log dump' '{prefix=log dump}'
Jan 31 08:51:42 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14736 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:42 compute-0 podman[263332]: 2026-01-31 08:51:42.328482261 +0000 UTC m=+0.479984024 container attach c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 08:51:42 compute-0 podman[263332]: 2026-01-31 08:51:42.328905983 +0000 UTC m=+0.480407776 container died c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chatterjee, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:51:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-948b0b9daad00ed42b330eda5620ea0ce5a5c9c4752d4314d96af3a93d6c01de-merged.mount: Deactivated successfully.
Jan 31 08:51:42 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14738 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:42 compute-0 podman[263332]: 2026-01-31 08:51:42.660501399 +0000 UTC m=+0.812003162 container remove c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chatterjee, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 31 08:51:42 compute-0 systemd[1]: libpod-conmon-c8d99e342937555a53bbcd7cc7dfb996c2b65a726ec10eb8a93e3d25bb2d88f7.scope: Deactivated successfully.
Jan 31 08:51:42 compute-0 podman[263474]: 2026-01-31 08:51:42.851478069 +0000 UTC m=+0.091618851 container create e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tharp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 08:51:42 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14740 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:42 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1949232717' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 31 08:51:42 compute-0 podman[263474]: 2026-01-31 08:51:42.778083908 +0000 UTC m=+0.018224720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 08:51:43 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:43 compute-0 systemd[1]: Started libpod-conmon-e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62.scope.
Jan 31 08:51:43 compute-0 systemd[1]: Started libcrun container.
Jan 31 08:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40995e59e178fda0159ee2ff148e5f8627c6942eb5481c776be7739e0a1ccba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40995e59e178fda0159ee2ff148e5f8627c6942eb5481c776be7739e0a1ccba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40995e59e178fda0159ee2ff148e5f8627c6942eb5481c776be7739e0a1ccba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40995e59e178fda0159ee2ff148e5f8627c6942eb5481c776be7739e0a1ccba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 08:51:43 compute-0 podman[263474]: 2026-01-31 08:51:43.361536129 +0000 UTC m=+0.601676951 container init e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 08:51:43 compute-0 podman[263474]: 2026-01-31 08:51:43.36896031 +0000 UTC m=+0.609101102 container start e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tharp, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:43 compute-0 podman[263474]: 2026-01-31 08:51:43.374009144 +0000 UTC m=+0.614149946 container attach e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tharp, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} v 0)
Jan 31 08:51:43 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 2.546112324845754e-07 of space, bias 1.0, pg target 7.638336974537263e-05 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.7421629738588775e-06 of space, bias 4.0, pg target 0.003290595568630653 quantized to 16 (current 16)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14750 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:43 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:43 compute-0 lvm[263700]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 08:51:43 compute-0 lvm[263700]: VG ceph_vg0 finished
Jan 31 08:51:43 compute-0 lvm[263705]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 31 08:51:43 compute-0 lvm[263705]: VG ceph_vg1 finished
Jan 31 08:51:43 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 31 08:51:43 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484109842' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 31 08:51:43 compute-0 lvm[263708]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 31 08:51:43 compute-0 lvm[263708]: VG ceph_vg2 finished
Jan 31 08:51:44 compute-0 ceph-mon[75227]: pgmap v1489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:44 compute-0 ceph-mon[75227]: from='client.14734 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:44 compute-0 ceph-mon[75227]: from='client.14736 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:44 compute-0 ceph-mon[75227]: from='client.14738 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:44 compute-0 ceph-mon[75227]: from='client.14740 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:44 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dnvgmk", "name": "rgw_frontends"} : dispatch
Jan 31 08:51:44 compute-0 amazing_tharp[263543]: {}
Jan 31 08:51:44 compute-0 systemd[1]: libpod-e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62.scope: Deactivated successfully.
Jan 31 08:51:44 compute-0 podman[263474]: 2026-01-31 08:51:44.125816619 +0000 UTC m=+1.365957401 container died e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tharp, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 08:51:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-40995e59e178fda0159ee2ff148e5f8627c6942eb5481c776be7739e0a1ccba2-merged.mount: Deactivated successfully.
Jan 31 08:51:44 compute-0 podman[263474]: 2026-01-31 08:51:44.4186177 +0000 UTC m=+1.658758482 container remove e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 08:51:44 compute-0 systemd[1]: libpod-conmon-e0a4e00c6b910f8e9c57140fbee0ada2d9581aef87ccbda8a6f1bd39fc3e7d62.scope: Deactivated successfully.
Jan 31 08:51:44 compute-0 sudo[263266]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 31 08:51:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:51:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 31 08:51:44 compute-0 ceph-mon[75227]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:51:44 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14753 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:44 compute-0 sudo[263764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 31 08:51:44 compute-0 sudo[263764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 31 08:51:44 compute-0 sudo[263764]: pam_unix(sudo:session): session closed for user root
Jan 31 08:51:44 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 31 08:51:44 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596618574' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 31 08:51:44 compute-0 systemd[1]: Starting Hostname Service...
Jan 31 08:51:44 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:44 compute-0 systemd[1]: Started Hostname Service.
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='client.14742 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='client.14746 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='client.14750 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: pgmap v1490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3484109842' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='mgr.14122 192.168.122.100:0/1251306279' entity='mgr.compute-0.fqetdi' 
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='client.14753 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1596618574' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 31 08:51:45 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870161625' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 31 08:51:45 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083199589' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 08:51:45 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:45 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 08:51:45 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='client.14756 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/3870161625' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2083199589' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 08:51:46 compute-0 ceph-mon[75227]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 08:51:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 08:51:46 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 31 08:51:46 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885827335' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 31 08:51:46 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14772 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:47 compute-0 ceph-mon[75227]: pgmap v1491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:47 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/885827335' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 31 08:51:47 compute-0 ceph-mon[75227]: from='client.14772 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 31 08:51:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302809609' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 31 08:51:47 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 31 08:51:47 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/390087635' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 31 08:51:47 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:48 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/1302809609' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 31 08:51:48 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/390087635' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 31 08:51:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 31 08:51:48 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966003140' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 31 08:51:48 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 31 08:51:48 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689549498' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 31 08:51:49 compute-0 ceph-mgr[75519]: log_channel(audit) log [DBG] : from='client.14782 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 08:51:49 compute-0 ceph-mon[75227]: pgmap v1492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 31 08:51:49 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/2966003140' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 31 08:51:49 compute-0 ceph-mon[75227]: from='client.? 192.168.122.100:0/689549498' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 31 08:51:49 compute-0 ceph-mon[75227]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 31 08:51:49 compute-0 ceph-mon[75227]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1200279567' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 31 08:51:49 compute-0 ceph-mgr[75519]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
